How to bridge the AI divide for equitable SNAP access
The role of government equity guidelines, AI literacy, and community collaboration
Image source: Brookings (2021)
Curiosity and intrigue rightly jump out from each paragraph of the recent White House executive order on the safe, secure, and trustworthy development and use of artificial intelligence (AI). The 19,700-word order makes a forceful case for responsible governance and ethical use of AI in government services, including public benefits like the Supplemental Nutrition Assistance Program (SNAP). Supporting around 41 million people annually, SNAP has lifted 3.7 million children out of poverty.
Even as AI promises to streamline access to benefits, the U.S. faces a worsening food insecurity crisis: 17 million households are currently affected, far exceeding the 13.5 million and 13.8 million documented in 2021 and 2022, respectively.
Drastic changes in SNAP eligibility criteria and the expiration of pandemic nutrition benefits exacerbate this issue, potentially cutting off 750,000 people from food assistance and increasing poverty, especially among people of color.
Despite the promise of AI in improving SNAP access, skepticism remains. A key concern is the AI divide, in which certain groups have less AI exposure and understanding, leaving them excluded from AI-driven systems such as risk scoring or automated SNAP screening. Their needs, priorities, and voices could then be absent from the subsequent training and evolution of AI models, perpetuating a cycle of discrimination and exclusion.
This gap raises the question of how local and state governments can bridge the AI divide to provide communities equitable access to SNAP benefits amid historical and contemporary racial and wealth disparities.
Equity oversight and regulation
Establishing guidelines to measure and benchmark equity in AI models is essential for governments to close this divide. These guardrails will inform how AI fits into the broader digital discrimination conversation around SNAP modernization. This is particularly significant when AI is applied to SNAP screening rules, such as those that deem young people too young to qualify as head of household for SNAP benefits.
The equity issue in AI lies in who has access to the technology, who is represented, and how they are represented in the foundational data sets that AI is developed upon.
For example, Georgia is seeking federal approval to use AI to reduce SNAP application backlogs and speed up the application and decision process that typically takes weeks or months. In contrast, California's report on AI's risks underscores the importance of human oversight in AI decisions, advocating for a balanced approach that combines AI efficiency with human judgment to ensure fairness and prevent exclusion in SNAP eligibility determinations.
Localizing AI to be community-driven
Cities and states are at the forefront of AI governance and policy implementation, especially in addressing immediate needs like food. This local approach is evident in the USDA's AI inventory, which includes 39 projects that foster state and community partnerships for public services like the Nutrition Education & Local Access Dashboard.
Involving SNAP beneficiaries and advocacy groups in AI design and evaluation is crucial for accurate national data representation. This collaborative approach helps communities shape AI systems and ensures the technology aligns with their needs and values. One example is Cornell University research that led to more equitable SNAP information delivery to Spanish speakers in California after identifying and correcting biases in existing algorithms: Google Ads charged significantly more to deliver online ads about SNAP benefits to Spanish-speaking people.
Partnerships between the private sector, academia, and community groups to reduce the AI divide are also notable. Initiatives like AI for Social Good illustrate this: multidisciplinary teams developed a new algorithm that reduces mispayments and unfair gains and increases fairness in distributing SNAP benefits. Engaging community groups in regular audits and bias testing of AI algorithms using diverse datasets enhances transparency in SNAP benefits automation and reduces access inequities.
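To make the idea of bias testing concrete, one widely used check compares approval rates across demographic groups (the "four-fifths" disparate-impact rule of thumb). The sketch below is illustrative only: the group labels, decision records, and 0.8 threshold are assumptions for the example, not an actual agency workflow or dataset.

```python
# Minimal sketch of a disparate-impact audit for an automated
# screening model. Group labels and decisions are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate for each demographic group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below 0.8 are a common (illustrative) red flag that
    should trigger human review of the model."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, screening decision)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(decisions)
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(round(disparate_impact_ratio(rates), 2))  # 0.33 -> flag for review
```

A community-driven audit would run a check like this on representative decision records at regular intervals, publishing the ratios so that beneficiaries and advocates can see whether automated screening treats groups comparably.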
AI literacy and skill-building
For Americans to access SNAP effectively, they need AI literacy. Digital equity practitioners from educational institutions, workplaces, and nonprofits involved in SNAP services need trusted resources to integrate AI education into their digital skills curricula. This approach prepares communities to use AI for economic opportunity and educational advancement and to bring AI into ongoing digital equity conversations.
AI literacy workshops, specifically designed for SNAP beneficiaries, are essential to help them navigate AI-enabled applications and identify potential inequities. AI readiness indices for government agencies can assess their workforce's ability to implement AI in their operations.
Training communities in AI applications and responsible development is vital, especially for those affected by SNAP changes. When used equitably, AI can alleviate social inequities and expand human capabilities, as evidenced by AI chatbots improving customer service.
Perhaps the question surrounding AI in improving access to SNAP benefits in America these days is not so much about whether things will get worse, but how much better they can get with appropriate government-led equity guidelines and community collaboration.