

"Demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people," said New Jersey's Democratic governor.
US President Donald Trump "is throwing this tantrum and calling Anthropic 'radical left' because they refuse to have their AI be used for illegal mass surveillance and murder. That's literally it."
That's how progressive commentator Kyle Kulinski described Trump's Friday social media post "directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use" of the artificial intelligence firm's technology—including its chatbot Claude.
As Kulinski's podcast co-host and wife Krystal Ball summarized, "According to the president, objecting to autonomous killer robots and mass surveillance is 'radical left.'"
Earlier this week, Defense Secretary Pete Hegseth gave Anthropic until 5:01 pm Eastern time Friday to agree to let the Pentagon use the company's AI tech however it wants. He threatened to declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or invoke the Defense Production Act, which would force the company to tailor the product to the Department of Defense's (DOD) needs.
After the DOD reportedly sent Anthropic its "best and final" offer Wednesday night, the company's CEO, Dario Amodei, published a blog post explaining that "we cannot in good conscience accede to their request," and reiterated opposition to enabling autonomous weapons or surveillance of US citizens.
While Anthropic employees, other tech experts, and critics of the current administration praised Amodei for "standing on principle" and choosing "war with the Department of War"—the president's preferred name for the Pentagon—Trump predictably lashed out at the company on his Truth Social platform.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," Trump wrote Friday afternoon.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he continued. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
Directing agencies to stop using Anthropic's tech, Trump added:
We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
Amodei had notably written in his blog post that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
While Trump's order preceded Hegseth's initial deadline, the defense secretary publicly weighed in at 5:14 pm, writing on Elon Musk's social media network X that "this week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States government or the Pentagon."
Hegseth described the company's terms of service as "defective altruism," and reiterated the Pentagon's position that "the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the republic."
The Pentagon chief also officially directed the DOD to designate the company a supply chain risk to national security, meaning that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
"Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service," Hegseth added. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final."
The New York Times noted that "the Pentagon is ready to move forward with Grok, produced by Elon Musk's xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching AI software would take time and almost certainly cause disruption."
While Anthropic hasn't publicly responded to Trump or Hegseth, critics, including congressional Democrats, have continued to praise the company and blast the administration for its handling of the conflict this week.
"Anthropic objected in part to the Department of Defense using its AI technology to engage in domestic mass surveillance. Do you agree that's a radical left, woke position?" asked Congressman Ted Lieu (D-Calif.). "That's actually the constitutional position, one that should be embraced by Americans regardless of party."
Replying to Trump's post specifically, Democratic New Jersey Gov. Mikie Sherrill similarly said: "Yet another alarming attack by the president on a private company defending its principles. Standing up against mass surveillance and demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people."
Describing himself as "one of Congress' most vocal proponents for the modernization" of DOD and US intelligence community (IC) missions with transformative technology, Senate Select Committee on Intelligence Vice Chair Mark R. Warner (D-Va.) said in a statement that "the president's directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
"President Trump and Secretary Hegseth's efforts to intimidate and disparage a leading American company—potentially as the pretext to steer contracts to a preferred vendor whose model a number of federal agencies have already identified as a reliability, safety, and security threat—pose an enormous risk to US defense readiness and the willingness of the US private sector and academia to work with the IC and DOD, consistent with their own values and legal ethics," he continued.
"Indeed," he added, "Secretary Hegseth's loud insistence on the sufficiency of an 'all lawful purposes' standard provides cold comfort against the backdrop of Pentagon leadership that has routinely sidelined career military attorneys and challenged longstanding norms and rules regarding lethal force."
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said one progressive commentator.
Defense Secretary Pete Hegseth gave Anthropic until Friday evening to agree to let the Pentagon use the company's artificial intelligence technology however it wants, or else. Roughly 24 hours ahead of the deadline, CEO Dario Amodei announced that "we cannot in good conscience accede to their request," and reiterated opposition to enabling autonomous weapons or surveillance of US citizens.
Anthropic's Claude was the first AI model allowed to handle classified US military data. While the Department of Defense (DOD) has now signed an agreement with Elon Musk's xAI and "is getting close to making a deal with Google," as the New York Times reported Monday, Hegseth demanded "unfettered" access to Claude during a Tuesday meeting with Amodei.
Hegseth threatened to declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or invoke the Defense Production Act, which would force Anthropic to tailor the product to the DOD's needs, if Amodei refused to drop the company's guardrails.
The CEO responded publicly with a Thursday blog post. Using President Donald Trump's preferred name for the Pentagon, he wrote that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do," Amodei continued. He explained the company's position that "using these systems for mass domestic surveillance is incompatible with democratic values."
"AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he wrote. "For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns, and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale."
The CEO also argued that "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." He noted that Anthropic offered to work directly with the department on research and development to "improve the reliability of these systems, but they have not accepted this offer."
Amodei concluded by expressing hope that the Pentagon revises its position, writing that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
Amodei's blog post followed CBS News reporting earlier Thursday that "Pentagon officials on Wednesday night sent Anthropic their best and final offer in negotiations for use of the company's artificial intelligence technology."
It also came just hours after Pentagon spokesperson Sean Parnell responded to a related post from a Google scientist on Musk's social media platform X. The DOD official claimed that "the Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media."
"Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, commonsense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell added, noting the Friday deadline and the threat to "terminate our partnership with Anthropic and deem them a supply chain risk."
While Amodei and observers await the Pentagon's next move, several Anthropic employees, other tech experts, and critics of the Trump administration praised the CEO for "standing on principle" and choosing "war with the Department of War."
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said progressive commentator Krystal Ball. "Perhaps this is a low bar but it isn’t clear any of the other leading AI companies would put principle above profits in ANY scenario. The Pentagon is sure to make Anthropic pay for daring to defy them."
Secretary of Defense Pete Hegseth said the company behind the AI assistant Claude would be punished unless it drops the ethical guardrails in its usage policy.
Defense Secretary Pete Hegseth has threatened to punish the artificial intelligence company Anthropic if it doesn't let the Pentagon use its technology however it wants—apparently even to create autonomous killer drones or conduct surveillance of Americans.
Anthropic's powerful AI model, Claude, is currently the only one permitted to handle classified military data, and the company was awarded a $200 million contract last year, alongside other AI firms, to develop AI capabilities for the Department of Defense.
However, the company's usage policy prohibits its use for mass surveillance and for the development of autonomous weapons—such as drones that attack targets without a human operator.
These limitations have infuriated the Defense Department leadership. On Tuesday, Hegseth called Anthropic's CEO, Dario Amodei, to a meeting at the Pentagon, where he demanded "unfettered" access to Claude without any guardrails.
This goal was outlined last month in the department's "AI Strategy" memo, which called for the US to adopt an "AI-first warfighting force" and for companies to allow their technology to be deployed for "any lawful use," free from ethical safeguards.
According to a senior defense official who spoke to Axios, Hegseth issued an ultimatum to Amodei on Tuesday: If he does not grant the Pentagon unrestricted use of Anthropic's technology by 5:01 pm on Friday, the department would take measures to coerce the company.
It would either declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its contract, or it would invoke the Defense Production Act, which would force the company to tailor the product to the military's needs.
While it would not be unusual for the Pentagon to simply cut ties with Anthropic, the threat to declare it a supply chain risk has been described as extraordinary.
Jessica Tillipman, the associate dean for government procurement law studies at George Washington University, who specializes in AI governance, wrote on social media that the threat of "declaring Anthropic a supply chain risk is deeply problematic," as it's "generally something we reserve for products that create security risks, and using it in this way undermines its purpose."
As Elizabeth Nolan Brown wrote on Wednesday for Reason, it "would mean anyone who wants to work with the US military in any capacity must sever ties with the AI company," which could deal a major blow to the business.
Last month, Amodei published an essay about how "AI-enabled autocracies" could use the technology to surveil and repress their citizens and wage war on less developed countries:
A swarm of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI, could be an unbeatable army, capable of both defeating any military in the world and suppressing dissent within a country by following around every citizen...
A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow. This could lead to the imposition of a true panopticon on a scale that we don’t see today.
Amodei reportedly resisted Hegseth's demands to lift restrictions at Tuesday's meeting, refusing to budge on the two key issues of mass surveillance and autonomous weapons. Following reports of the meeting, the company has said it still wants to work with the government while also ensuring its models are used in line with what they could “reliably and responsibly do.”
A senior Pentagon spokesperson said the military must be free to use the technology how it sees fit. According to the Associated Press, the official argued that "the Pentagon has only issued lawful orders and stressed that using Anthropic’s tools legally would be the military’s responsibility."
The question of whether the Pentagon has issued only "lawful" orders is in dispute—in fact, the Pentagon is fighting to cut the retirement pay of Sen. Mark Kelly (D-Ariz.), a retired Navy captain, after he made a video in November reminding active duty troops that they have a duty not to obey illegal orders.
That video was made in response to reports that Hegseth had given orders to bomb the survivors of one of the administration's boat strikes in the Caribbean—an act described as a potential "war crime" amid a broader campaign that legal experts have said is illegal under both US and international law.
The military also reportedly used Claude as part of another legally questionable act last month: the operation to kidnap Venezuelan President Nicolás Maduro, which involved bombing across Caracas and killed at least 83 people. It is not clear how the model was used during the attack.
While the Pentagon has not specified which restricted activities it wishes to pursue using Anthropic's technology, Sen. Ruben Gallego (D-Ariz.) said that with his demands, Hegseth was essentially telling the company, "Let us use your AI for mass surveillance, or we’ll pull your contract."
Under President Donald Trump, Gallego added, “corporations are punished for refusing to spy on American citizens.”