We should all be frightened by this use of AI for death and destruction. But this is not new. Israel and the U.S. have been testing and using AI in Palestine for years.
Earlier this month, OpenAI, the company that brings us ChatGPT, announced a partnership with the California-based weapons company Anduril to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision-making on the battlefield. This fits squarely within the US military’s and OpenAI’s plans to normalize the use of AI on the battlefield.
Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including Sentry surveillance towers, currently used at US military bases worldwide, at the US-Mexico border, and on the British coastline to detect migrants in boats. On December 3, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.
In January, OpenAI deleted from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Less than one week later, the company announced a cybersecurity partnership with the Pentagon.
While it may have removed the ban on making weapons, OpenAI’s lurch into the war industry is in total antithesis to its own charter. Its own proclamation to build “safe and beneficial AGI [artificial general intelligence]” that does not “harm humanity” is laughable when its technology is being used to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist invasions and occupations.
OpenAI’s lurch into the war industry is in total antithesis to its own charter.
We should all be frightened by this use of AI for death and destruction. But this is not new. Israel and the US have been testing and using AI in Palestine for years. In fact, Hebron has been dubbed a “smart city” as the occupation enforces its tyranny through a proliferation of motion and heat sensors, facial recognition technologies, and CCTV surveillance. At the center of this oppressive surveillance is the Blue Wolf System, an AI tool that scans the faces of Palestinians photographed by Israeli occupation soldiers and checks them against a biometric database in which information about them is stored. Upon inputting the photo into the system, each person is assigned a color-coded rating based on their perceived ‘threat level,’ dictating whether the soldier should let them pass or arrest them. Soldiers are rewarded with prizes for taking the most photographs using the system, which they have termed “Facebook for Palestinians,” according to revelations from the Washington Post in 2021.
OpenAI’s war technology comes as the Biden administration is pushing for the US to use the technology to “fulfill national security objectives.” This was in fact part of the title of a White House memorandum released in October this year calling for rapid development of artificial intelligence “especially in the context of national security systems.” While not explicitly naming China, it is clear that a perceived ‘AI arms race’ with China is a central motivation of the Biden administration for such a call. Nor is this race solely about weapons for war; it is also about the development of the technology writ large. Earlier this month, the US banned the export to China of HBM (high-bandwidth memory) chips, a critical component of AI systems and high-end graphics processing units (GPUs). Former Google CEO Eric Schmidt warned that China is two to three years ahead of the US when it comes to AI, a major change from his statements earlier this year, when he remarked that the US was ahead of China. When he speaks of a “threat escalation matrix” around developments in AI, he reveals that the US sees the technology only as a tool of war and a way to assert hegemony. AI is the latest front in the US’ unrelenting, and dangerous, provocation of and fearmongering about China, which it cannot bear to see advance.

In response to the White House memorandum, OpenAI released a statement of its own re-asserting many of the White House’s lines about “democratic values” and “national security.” But what is democratic about a company developing technology to better target and bomb people? Who is made secure by the collection of information to better direct war technology? The statement reveals the company’s alignment with the Biden administration’s anti-China rhetoric and imperialist justifications. As the company that has surely done the most to push AGI systems into general society, it is deeply alarming that OpenAI has ditched all codes and jumped right in with the Pentagon. While it’s not surprising that companies like Palantir, or even Anduril itself, are using AI for war, from a company like OpenAI, a supposedly mission-driven nonprofit, we should expect better.
AI is being used to streamline killing: at the US-Mexico border, in Palestine, and in US imperial outposts across the globe. While AI systems seem innocently embedded within our daily lives, from search engines to music streaming sites, we must not forget that these same companies are using the same technology lethally. While ChatGPT might give you ten ways to protest, it is likely being trained to kill, better and faster.
From the war machine to our planet, AI in the hands of US imperialists means only more profits for them and more devastation and destruction for us all.
"All signs point to the Pentagon developing 'killer robots' via Replicator, despite deflections from Pentagon representatives themselves," according to Public Citizen.
A report from the government watchdog Public Citizen released Friday gives the who, what, when, where, and why of the Pentagon's flagship Replicator initiative—a program to increase the number of weapons, particularly drones, in the hands of the U.S. military.
In the report, Public Citizen re-ups concerns about one particular aspect of the program. According to the report's author, Savannah Wooten, the Defense Department has remained ambiguous on the question of whether it is developing artificial intelligence weapons that can "deploy lethal force autonomously—without a human authorizing the specific use of force in a specific context." These types of weapons are also known as "killer robots."
"It is not yet clear whether or not these technologies are designed, tested, or intended for killing," according to the report.
"All signs point to the Pentagon developing 'killer robots' via Replicator, despite deflections from Pentagon representatives themselves," wrote Wooten in the summary of the report.
The program, which was announced last year, is part of the Department of Defense's plan to deter China.
"Replicator is meant to help us overcome [China's] biggest advantage, which is mass. More ships. More missiles. More people," said Deputy Secretary of Defense Kathleen Hicks in a speech announcing the project last year. That mission will be achieved specifically by "mastering the technology of tomorrow," Hicks said.
According to Public Citizen's report, a "Replicator 2.0" focused on counter-drone technologies will soon follow, per a memo from the defense secretary released in September.
In a letter sent in March, Public Citizen and 13 other civil society groups highlighted remarks Hicks made in 2023 as an example of the ambiguity the Pentagon has created around the issue.
"Autonomous weapons are inherently dehumanizing and unethical, no matter whether a human is 'ultimately' responsible for the use of force or not. Deploying lethal artificial intelligence weapons in battlefield conditions necessarily means inserting them into novel conditions for which they have not been programmed, an invitation for disastrous outcomes," the organizations wrote to Hicks and Secretary of Defense Lloyd Austin.
Wooten's report reiterates that same call: "The Pentagon owes Americans clarity about its own role in advancing the autonomous weapons arms race via Replicator, as well as a detailed plan for ensuring it does not open a Pandora’s Box of new, lethal weapons on the world by refusing to hold its own operations accountable."
Additionally, "'Artificial intelligence' should not be used as a catchall justification to summon billions more in Pentagon spending, especially when the existing annual budget for the U.S. military already dwarfs every other U.S. agency and is careening towards the $1 trillion mark," Wooten wrote.
The fear that these types of weapons would open a Pandora's Box—and set off a "reckless, dangerous arms race," as Public Citizen warned of Friday—is not new. Back in 2017, dozens of artificial intelligence and robotics experts published a letter urging the United Nations to ban the development and use of so-called killer robots. As drone warfare has grown, those calls have continued.
The report also highlights public statements by the head of one defense contractor selected to produce weapons for the Replicator initiative as a hint that the program is aimed at creating weapons capable of autonomous lethal force.
In early October, Anduril CEO Palmer Luckey said that "societies have always needed a warrior class that is enthused and excited about enacting violence on others in pursuit of good aims."
"You need people like me who are sick in that way and who don't lose any sleep making tools of violence in order to preserve freedom," he said.
"Without explicit legal rules, the world faces a grim future of automated killing that will place civilians everywhere in grave danger," one human rights expert said.
As drone warfare continues to proliferate worldwide and concerns grow over militaries' use of artificial intelligence, Human Rights Watch on Monday backed United Nations Secretary-General António Guterres' call for an international treaty to ban "killer robots" that select and attack targets without human oversight.
In an August 6 report, Guterres urged the international community to negotiate a treaty prohibiting lethal autonomous weapons systems by 2026. The idea enjoys wide support: of the 58 submissions to the report, drawn from more than 73 countries, 47 endorsed either a ban or increased regulation.
"The U.N, secretary-general emphasizes the enormous detrimental effects removing human control over weapons systems would have on humanity," Mary Wareham, deputy crisis, conflict, and arms director at Human Rights Watch, said in a statement. "The already broad international support for tackling this concern should spur governments to start negotiations without delay."
The momentum behind a ban is building as armies around the world increasingly deploy and test militarized robots and drones. The U.S. was the first nation to widely deploy drone warfare, during its War on Terror campaigns in Afghanistan, Pakistan, Yemen, and Iraq following the attacks of 9/11. But today, from Israel's use of drones in Gaza and the West Bank to China's and Russia's growing arsenals, the use of remotely operated weapons is rapidly expanding.
HRW's call came the same day that Russia launched an attack on Ukrainian energy infrastructure using around 200 missiles and drones. The attack killed at least five people and knocked out power and water in several parts of the country, Reuters reported.
"It was one of the biggest combined strikes. More than a hundred missiles of various types and about a hundred Shahed drones. And like most previous Russian strikes, this one is just as sneaky, targeting critical civilian infrastructure," Ukrainian President Volodymyr Zelenskyy said on Telegram.
Ukraine has also used long-range attack drones against Russia, targeting sites such as oil refineries and military airfields.
Also on Monday, North Korean state media reported that the country's leader Kim Jong Un had supervised a test of a North Korean attack drone.
Pictures showed a drone colliding with a target resembling a South Korean K-2 main battle tank and obliterating it in a fiery explosion.
The North Korean test coincides with increased tensions with South Korea and the U.S. as well as a joint exercise by the two countries to prepare their militaries for a potential conflict with North Korea.
Noting their importance in modern warfare, Kim said he wanted North Korea to be equipped with drones "as soon as possible" and urged the manufacturing of several types including exploding drones, attack drones, and underwater suicide drones, according to North Korean state media.
Drones featured in another global hot spot on Friday, when China sent two drones over the sea between Taiwan and Japan's westernmost island of Yonaguni, according to the Joint Staff Office under Japan's Defense Ministry.
China's move followed two actions by the U.S. military: the first ever U.S. Marine Corps deployment of a radar capable of sensing aerial threats including drones from Yonaguni on July 29 during exercises and the sending of the destroyer USS Ralph Johnson to the Taiwan Strait on Thursday.
Israel has also used drones "systematically" in its ongoing war on Gaza.
In a February report, the Euro-Mediterranean Human Rights Monitor said it had confirmed that dozens of civilians had been killed by "small killer drones," including the Matrice 600 and LANIUS models. The drones were equipped with explosives, machine guns, and artificial intelligence.
"Israel is intentionally using them to target Palestinian civilians in the Gaza Strip," Euro-Med said, adding that "the majority of Israel's targeting takes place in public spaces where it is easy to distinguish fighters from civilians."
World leaders will have a chance to curb the proliferation of drones in warfare in New York in September, when they will convene at U.N. headquarters for the Summit of the Future, an initiative of Guterres.
The summit is expected to produce a "Pact for the Future," the current draft of which recommends acting "with urgency" toward international control of killer robots.
"The Summit of the Future provides an important opportunity for states to express high-level support for opening negotiations to ban and restrict autonomous weapons systems," Wareham said. "Without explicit legal rules, the world faces a grim future of automated killing that will place civilians everywhere in grave danger."