A visualization symbolizes artificial intelligence. (Photo: Monsitj/iStock via Getty Images)

3 Reasons Using AI in Decision-Making Harms Low-Income Americans

All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI.

The billions of dollars poured into artificial intelligence, or AI, haven’t delivered on the technology’s promised revolutions, such as better medical treatment, advances in scientific research, or increased worker productivity.

So, the AI hype train purveys the underwhelming: slightly smarter phones, text-prompted graphics, and quicker report-writing (if the AI hasn’t made things up). Meanwhile, there’s a dark underside to the technology that goes unmentioned by AI’s carnival barkers—the widespread harm that AI presently causes low-income people.

AI and related technologies are used by governments, employers, landlords, banks, educators, and law enforcement to wrongly cut in-home caregiving services for disabled people; accuse unemployed workers of fraud; deny people housing, employment, or credit; take kids from loving parents and put them in foster care; intensify domestic violence and sexual abuse or harassment; label and mistreat middle- and high-school kids as likely dropouts or criminals; and falsely accuse Black and brown people of crimes.

All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI, according to a new report by TechTonic Justice. This shift toward AI decision-making carries risks not present in the human-centered methods it replaces and defies all existing accountability mechanisms.

First, AI expands the scale of risk far beyond any individual decision-maker. Sure, humans can make mistakes or be biased. But their reach is limited to the people they directly make decisions about; for a landlord, a direct supervisor, or a government caseworker, that might top out at a few hundred people. With AI, the risks of misapplied policies, coding errors, bias, or cruelty are centralized in a single system and applied to masses of people, from several thousand to millions at a time.

Second, the use of AI and the reasons for its decisions are not easily known by the people subject to them. Government agencies and businesses often have no obligation to affirmatively disclose that they are using AI. And even if they do, they might not divulge the key information needed to understand how the systems work.

Third, the supposed sophistication of AI lends a cloak of rationality to policy decisions that are hostile to low-income people, paving the way for further bad policy aimed at these communities. Benefit cuts, such as the cuts to in-home care services for disabled people that I fought against, are masked as objective determinations of need. And workplace management and surveillance systems that undermine employee stability and safety pass as tools to maximize productivity. To invoke the proverb, AI wolves wear sheep avatars.

The scale, opacity, and costuming of AI make harmful decisions difficult to fight on an individual level. How can you prove that AI was wrong if you don't even know it's being used or how it works? And even if you do, will it matter when the AI's decision is backed by claims of statistical sophistication and validity, no matter how dubious?

On a broader level, existing accountability mechanisms don’t rein in harmful AI. AI-related scandals in public benefit systems haven’t turned into political liabilities for the governors in charge of failing Medicaid or Unemployment Insurance systems in Texas and Florida, for example. And the agency officials directly implementing such systems are often protected by the elected officials whose agendas they are executing.

Nor does the market discipline wayward AI uses against low-income people. One major developer of eligibility systems for state Medicaid programs has secured $6 billion in contracts even though its systems have failed in similar ways in multiple states. Likewise, a large data broker had no problem winning contracts with the federal government even after a security breach divulged the personal information of nearly 150 million Americans.

Existing laws similarly fall short. Without any meaningful AI-specific legislation, people must apply existing legal claims to the technology. Usually based on anti-discrimination laws or procedural requirements like getting adequate explanations for decisions, these claims are often available only after the harm has happened and offer limited relief. While such lawsuits have had some success, they alone are not the answer. After all, lawsuits are expensive; low-income people can’t afford attorneys; and quality, no-cost representation available through legal aid programs may not be able to meet the demand.

Right now, unaccountable AI systems make unchallengeable decisions about low-income people at unfathomable scales. Federal policymakers won’t make things better. The Trump administration quickly rescinded protective AI guidance that former U.S. President Joe Biden issued. And, with President Donald Trump and Congress favoring industry interests, short-term legislative fixes are unlikely.

Still, that doesn’t mean all hope is lost. Community-based resistance has long fueled social change. With additional support from philanthropy and civil society, low-income communities and their advocates can better resist the immediate harms and build political power needed to achieve long-term protection against the ravages of AI.

Organizations like mine, TechTonic Justice, will empower these frontline communities and advocates with battle-tested strategies that incorporate litigation, organizing, public education, narrative advocacy, and other dimensions of change-making. In the end, fighting from the ground up is our best hope to take AI-related injustice down.

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.