What Musk is doing is tantamount to hacking the inner core of the federal government and the public trust—a blatant coup and power grab for technocratic ends.
It’s hard to read articles about the “move fast and break things” approach of the Trump administration without also hearing about the hovering presence of the world’s richest man, technocrat extraordinaire Elon Musk. The mainstream media likes to describe Musk primarily as an oligarch. His involvement—which now includes having a desk in the White House—is alarming and something hardly anyone expected. Unfortunately, most media reports lack an important perspective on this unexpected bestowal of political power on him and other technocratic oligarchs. Is this a deliberate omission, or do many media outlets simply have blinders on because, in their perception, Big Tech is now fundamental to Wall Street’s economy and national security?
Musk is a true technocrat and represents the forefront of a new technocratic form of government that we are hurtling toward at light speed. However, the notion of technocratic governance is simply not on the radar of the mainstream media, various political think tanks, and Congress. In the case of the media, journalists often appear to be enmeshed in worldviews more appropriate to the late 1990s than to the complex and often baffling world picture we see today. Many articles about Musk focus on issues such as the legality of the Department of Government Efficiency (DOGE) and the serious conflicts of interest that exist. Then, of course, there’s the sheer insanity of handing the keys to the kingdom to a small group of computer tech bros inexperienced in matters of state, who appear not to have been properly vetted or advised of existing privacy law and national security protocols. The idea that these individuals now have access to troves of U.S. citizens’ personal data is simply beyond comprehension. Still, while these are legitimate concerns, the larger implications of technocratic management are getting bypassed.
The advent of the technocratic state poses a clear and present threat to democratic norms. But in the early days of his presidency, Donald Trump has thrown the door wide open to its instantiation, first with the public announcement of a $500-billion joint AI development effort, with Oracle CEO Larry Ellison and AI frontman Sam Altman accompanying him on stage. I’ve written previously about the average member of Congress’s lack of technological sophistication, and this knowledge gap is a deep concern. It creates a power vacuum that is being fully exploited by wealthy and powerful unelected technocrats at the forefront of accelerationist-style AI development.
Is there anything that can stop this runaway freight train from running over the public’s needs and rights and constitutional norms? We’re all now highly dependent on phones and computing devices to carry out even the simplest tasks of everyday life. This life-limiting technological dependency represents a fundamental means of shifting power and control to elites who have the tech-based sophistication and infrastructure to leverage that control for their own advantage, facilitating a behind-the-scenes transfer of money and power up the food chain.
To think that Musk is motivated to “help out” with this internal nation-building would be naïve. As Anna Wiener wrote in a recent New Yorker article, “Tech executives see an opportunity to shape the world in their image.” Musk became the world’s richest individual only through a laser-like focus on self-interest and various questionable vanity projects. What’s also concerning is that this power shift toward a technocratic state is happening in the mere first few months of Trump’s presidency. Was this the president’s Reaganesque answer to making things more affordable, or a cynical bypass of those campaign promises?
I’m not going to say that AI isn’t interesting or doesn’t have great potential for positive change, as do many digital technologies—in theory at least. But we’ve already squandered opportunities to shape the internet as a force for social good, with Big Tech moving to hijack its capabilities for marketing, advertising, social control, and even psychological manipulation. It’s more than a small concern that AI will follow a similar trajectory. Have we seen many announcements to date where AI will be used to solve global macro-problems such as the climate crisis, wealth inequality, poverty, or automation’s negative effects on job markets? More likely, it will only exacerbate these problems. For example, AI’s insatiable need for electric power has been a key factor in the triumphant rebranding of nuclear power as a “green” technology; the most salient example is Microsoft’s intent to use the Three Mile Island nuclear plant to power its AI farms. As for wealth inequality, it seems clear that AI is already widening the divide between the economic classes. And one of the most prominent uses of AI, domestically and no doubt in China and Russia as well, has been to provide new capabilities for drone attacks and nuclear warfare.
The first step toward counteracting these trends would be to better educate both Congress and the public about the still poorly understood dangers of a technocratic state, which heralds further fusion of corporate and government power (historically, a hallmark of authoritarianism). In a way, this is a nonpartisan issue, because Democrats have done their own share of cozying up to Big Tech’s plans for our future over the years. One possible small step might be for Congress to re-fund the Office of Technology Assessment. While this is hardly a panacea, providing more tech-savvy advice to Congress would be a move in the right direction and might serve to balance the advice the White House receives from the Office of Science and Technology Policy (OSTP). We have yet to hear of anyone in Congress, Democrat or Republican, stepping up to warn about the dangers of technocracy, not just as a political phenomenon but also as a social and quality-of-life issue. Most likely, both high-profile media outlets and Congress are sidestepping the issue with a kind of strategic incompetence in order to support the powerful economic interests represented by their Big Tech donors.
It’s time to sound the alarm. What Musk is doing is tantamount to hacking the inner core of the federal government and the public trust—a blatant coup and power grab for technocratic ends. Yes, there is a definite case to be made for rooting out government waste, abuse, and corruption. But there is a better and more measured way to proceed. Finally, it’s worth asking whether Donald Trump fully understands the constitutional implications of opening this Pandora’s box. He turned Musk loose on existing guardrails either knowingly or unknowingly, but it hardly matters: Both scenarios are equally troubling. Regardless of the outcome of pending and future court cases, we should all be forewarned that 2025 is rapidly shaping up to be the year we lost our civil liberties and protections (and our country as we know it) to AI and the Technocrat-in-Chief, Elon Musk.
"Google will probably now work on deploying technology directly that can kill people," said one former ethical AI staffer at the tech giant.
Weeks into U.S. President Donald Trump's second term, Google on Tuesday removed from its Responsible AI principles a commitment to not use artificial intelligence to develop technologies that could cause "overall harm," including weapons and surveillance—walking back a pledge that employees pushed for seven years ago as they reminded the company of its motto at the time: "Don't be evil."
That maxim was deleted from the company's code of conduct shortly after thousands of employees demanded in 2018 that Google end its collaboration with the Pentagon on potential drone technology, and this week officials at the Silicon Valley giant announced they will no longer promise to refrain from AI weapons development.
James Manyika, senior vice president for research, technology, and society, and Demis Hassabis, CEO of the company's AI research lab DeepMind, wrote in a blog post on progress in "Responsible AI" that in "an increasingly complex geopolitical landscape... democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
"And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," they said.
Until Tuesday, Google pledged that "applications we will not pursue" with AI included weapons, surveillance, technologies that "cause or are likely to cause overall harm," and uses that violate international law and human rights.
"Is this as terrifying as it sounds?" asked one journalist and author as the mention of those applications disappeared from the campany's AI Principles page, where it had been included as recently as last week.
Margaret Mitchell, who previously co-led Google's ethical AI team, told Bloomberg that the removal of the principles "is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."
"It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public."
The company's updated AI Principles page says it will implement "appropriate human oversight" to align its work with "widely accepted principles of international law and human rights" and that it will use testing and monitoring "to mitigate unintended or harmful outcomes and avoid unfair bias."
But with Google aligning itself with the Trump administration, human rights advocate Sarah Leah Whitson of Democracy for the Arab World Now called the company a "corporate war machine" following Tuesday's announcement.
Google, along with other tech giants, donated $1 million to Trump's inaugural committee and sent CEO Sundar Pichai to the inauguration, where he sat next to the president's top ally in the industry, Elon Musk.
Since Trump won the election in November, tech companies have also distanced themselves from previous pledges to strive for diversity, equity, and inclusion in their hiring and workplace practices, as Trump has directly targeted DEI programs in the federal government.
"It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public," Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA, toldWired on Tuesday.
At Google, said Koul, there is still "long-standing employee sentiment that the company should not be in the business of war."
All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI.
The billions of dollars poured into artificial intelligence, or AI, haven’t delivered on the technology’s promised revolutions, such as better medical treatment, advances in scientific research, or increased worker productivity.
So, the AI hype train purveys the underwhelming: slightly smarter phones, text-prompted graphics, and quicker report-writing (if the AI hasn’t made things up). Meanwhile, there’s a dark underside to the technology that goes unmentioned by AI’s carnival barkers—the widespread harm that AI presently causes low-income people.
AI and related technologies are used by governments, employers, landlords, banks, educators, and law enforcement to wrongly cut in-home caregiving services for disabled people; accuse unemployed workers of fraud; deny people housing, employment, or credit; take kids from loving parents and put them in foster care; intensify domestic violence and sexual abuse or harassment; label and mistreat middle- and high-school kids as likely dropouts or criminals; and falsely accuse Black and brown people of crimes.
All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI, according to a new report by TechTonic Justice. This shift toward AI decision-making carries risks not present in the human-centered methods that preceded it and defies all existing accountability mechanisms.
First, AI expands the scale of risk far beyond individual decision-makers. Sure, humans can make mistakes or be biased. But their reach is limited to the people they directly make decisions about. In cases of landlords, direct supervisors, or government caseworkers, that might top out at a few hundred people. But with AI, the risks of misapplied policies, coding errors, bias, or cruelty are centralized through the system and applied to masses of people ranging from several thousand to millions at a time.
Second, the use of AI and the reasons for its decisions are not easily known by the people subject to them. Government agencies and businesses often have no obligation to affirmatively disclose that they are using AI. And even if they do, they might not divulge the key information needed to understand how the systems work.
Third, the supposed sophistication of AI lends a cloak of rationality to policy decisions that are hostile to low-income people. This paves the way for further implementation of bad policy for these communities. Benefit cuts, such as those to in-home care services that I fought against for disabled people, are masked as objective determinations of need. Or workplace management and surveillance systems that undermine employee stability and safety pass as tools to maximize productivity. To invoke the proverb, AI wolves use sheep avatars.
The scale, opacity, and costuming of AI make harmful decisions difficult to fight on an individual level. How can you prove that AI was wrong if you don’t even know that it is being used or how it works? And, even if you do, will it matter when the AI’s decision is backed up by claims of statistical sophistication and validity, no matter how dubious?
On a broader level, existing accountability mechanisms don’t rein in harmful AI. AI-related scandals in public benefit systems haven’t turned into political liabilities for the governors in charge of failing Medicaid or Unemployment Insurance systems in Texas and Florida, for example. And the agency officials directly implementing such systems are often protected by the elected officials whose agendas they are executing.
Nor does the market discipline wayward AI uses against low-income people. One major developer of eligibility systems for state Medicaid programs has secured $6 billion in contracts even though its systems have failed in similar ways in multiple states. Likewise, a large data broker had no problem winning contracts with the federal government even after a security breach divulged the personal information of nearly 150 million Americans.
Existing laws similarly fall short. Without any meaningful AI-specific legislation, people must apply existing legal claims to the technology. Usually based on anti-discrimination laws or procedural requirements like getting adequate explanations for decisions, these claims are often available only after the harm has happened and offer limited relief. While such lawsuits have had some success, they alone are not the answer. After all, lawsuits are expensive; low-income people can’t afford attorneys; and quality, no-cost representation available through legal aid programs may not be able to meet the demand.
Right now, unaccountable AI systems make unchallengeable decisions about low-income people at unfathomable scales. Federal policymakers won’t make things better. The Trump administration quickly rescinded protective AI guidance that former U.S. President Joe Biden issued. And, with President Donald Trump and Congress favoring industry interests, short-term legislative fixes are unlikely.
Still, that doesn’t mean all hope is lost. Community-based resistance has long fueled social change. With additional support from philanthropy and civil society, low-income communities and their advocates can better resist the immediate harms and build political power needed to achieve long-term protection against the ravages of AI.
Organizations like mine, TechTonic Justice, will empower these frontline communities and advocates with battle-tested strategies that incorporate litigation, organizing, public education, narrative advocacy, and other dimensions of change-making. In the end, fighting from the ground up is our best hope to take AI-related injustice down.