"Many nations are looking to Israel and its use of AI in Gaza with admiration and jealousy," said one expert. "Expect to see a form of Google, Microsoft, and Amazon-backed AI in other war zones soon."
Several recent journalistic investigations—including one published Tuesday by The Associated Press—have deepened the understanding of how Israeli forces are using artificial intelligence and cloud computing systems sold by U.S. tech titans for the mass surveillance and killing of Palestinians in Gaza.
The AP's Michael Biesecker, Sam Mednick, and Garance Burke found that Israel's use of Microsoft and OpenAI technology "skyrocketed" following Hamas' October 7, 2023, attack on Israel.
"This is the first confirmation we have gotten that commercial AI models are directly being used in warfare," Heidy Khlaaf, chief artificial intelligence scientist at the AI Now Institute and a former senior safety engineer at OpenAI, which makes ChatGPT, told the AP. "The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward."
As Biesecker, Mednick, and Burke noted:
Israel's goal after the attack that killed about 1,200 people and took over 250 hostages was to eradicate Hamas, and its military has called AI a "game changer" in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon and nearly 70% of the buildings in Gaza have been devastated, according to health ministries in Gaza and Lebanon.
According to the AP report, Israel buys advanced AI models from OpenAI through Microsoft's Azure cloud platform. While OpenAI said it has no partnership with the Israel Defense Forces (IDF), in early 2024 the company quietly removed language from its usage policy that prohibited military use of its technology.
The AP reporters also found that Google and Amazon provide cloud computing and AI services to the IDF via Project Nimbus, a $1.2 billion contract signed in 2021. Furthermore, the IDF uses server farms and data centers supplied by Cisco and Dell, and Red Hat, an independent IBM subsidiary, sells cloud computing services to the Israeli military. Microsoft partner Palantir Technologies also has a "strategic partnership" with Israel's military.
Google told the AP that the company is committed to creating AI "that protects people, promotes global growth, and supports national security."
However, Google recently removed from its Responsible AI principles a commitment to not use AI for the development of technology that could cause "overall harm," including weapons and surveillance.
The AP investigation follows a Washington Post probe published last month detailing how Google has been "directly assisting" the IDF and Israel's Ministry of Defense "despite the company's efforts to publicly distance itself from the country's national security apparatus after employee protests against a cloud computing contract with Israel's government."
Google fired dozens of workers following their participation in "No Tech for Apartheid" protests against the use of the company's products and services by forces accused of genocide in Gaza.
"A Google employee warned in one document that if the company didn't quickly provide more access, the military would turn instead to Google's cloud rival Amazon, which also works with Israel's government under the Nimbus contract," wrote Gerrit De Vynck, author of the Post report.
"As recently as November 2024, by which time a year of Israeli airstrikes had turned much of Gaza to rubble, documents show Israel's military was still tapping Google for its latest AI technology," De Vynck added. "Late that month, an employee requested access to the company's Gemini AI technology for the IDF, which wanted to develop its own AI assistant to process documents and audio, according to the documents."
Previous investigations have detailed how the IDF also uses Habsora, an Israeli AI system that can automatically select airstrike targets at a pace far beyond anything previously possible.
"In the past, there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day," former IDF Chief of Staff Aviv Kochavi told Yuval Abraham of +972 Magazine, a joint Israeli-Palestinian publication, in 2023. Another intelligence source said that Habsora has transformed the IDF into a "mass assassination factory" in which the "emphasis is on quantity and not quality" of kills.
Compounding the crisis, in the heated hours following the October 7 attack, mid-ranking IDF officers were empowered to order attacks not only on senior Hamas commanders but on any fighter in the resistance group, no matter how junior. What's more, the officers were allowed to risk up to 20 civilian lives in each strike, and up to 500 noncombatant lives per day. Days later, that limit was lifted: officers could order as many strikes as they believed were legal, with no limits on civilian harm.
Senior IDF commanders sometimes approved strikes they knew could kill more than 100 civilians if the target was deemed important enough. In one AI-aided airstrike targeting a senior Hamas commander in October 2023, the IDF dropped multiple U.S.-supplied 2,000-pound bombs, which can level an entire city block, on the Jabalia refugee camp. According to the U.K.-based airstrike monitor Airwars, the bombing killed at least 126 people, 68 of them children, and wounded 280 others. Hamas' Qassam Brigades said four Israeli and three international hostages were also killed in the attack.
Then there's the mass surveillance element. Independent journalist Antony Loewenstein recently wrote for Middle East Eye that "corporate behemoths are storing massive amounts of information about every aspect of Palestinian life in Gaza, the occupied West Bank, and elsewhere."
"How this data will be used, in a time of war and mass surveillance, is obvious," Loewenstein continued. "Israel is building a huge database, Chinese-state style, on every Palestinian under occupation: what they do, where they go, who they see, what they like, what they want, what they fear, and what they post online."
"Palestinians are guinea pigs—but this ideology and work doesn't stay in Palestine," he said. "Silicon Valley has taken note, and the new Trump era is heralding an ever-tighter alliance among Big Tech, Israel, and the defense sector. There's money to be made, as AI currently operates in a regulation-free zone globally."
"Think about how many other states, both democratic and dictatorial, would love to have such extensive information about every citizen, making it far easier to target critics, dissidents, and opponents," Loewenstein added. "With the
far right on the march globally—from Austria to Sweden, France to Germany, and the U.S. to Britain—Israel's ethno-nationalist model is seen as attractive and worth mimicking.
"While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved."
Emerging technologies including artificial intelligence, lethal autonomous weapons systems, and hypersonic missiles pose a potentially existential threat that underscores the imperative of arms control measures to slow the pace of weaponization, according to a new report published Tuesday.
The Arms Control Association report—entitled Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability—"unpacks the concept of 'emerging technologies' and summarizes the debate over their utilization for military purposes and their impact on strategic stability."
The publication notes that the world's military powers "have sought to exploit advanced technologies—artificial intelligence, autonomy, cyber, and hypersonics, among others—to gain battlefield advantages," but warns that too little has been said about the dangers these weapons represent.
"Some officials and analysts posit that such emerging technologies will revolutionize warfare, making obsolete the weapons and strategies of the past," the report states. "Yet, before the major powers move quickly ahead with the weaponization of these technologies, there is a great need for policymakers, defense officials, diplomats, journalists, educators, and members of the public to better understand the unintended and hazardous outcomes of these technologies."
"A new @ArmsControlNow report assesses the extent to which military use of emerging tech could result in an accidental use of nuclear weapons in a crisis, and provides a framework for curtailing the indiscriminate weaponization of such tech. Available at https://t.co/gPyDbcaOcd" —Arms Control Association (@ArmsControlNow), February 7, 2023
Lethal autonomous weapons systems—defined by the Campaign to Stop Killer Robots as armaments that operate independently of "meaningful human control"—are being developed by nations including China, Israel, Russia, South Korea, the United Kingdom, and the United States. The U.S. Air Force's sci-fi-sounding Skyborg Autonomous Control System, currently under development, is, according to the report, "intended to control multiple drone aircraft simultaneously and allow them to operate in 'swarms,' coordinating their actions with one another with minimum oversight by human pilots."
"Although the rapid deployment of such systems appears highly desirable to many military officials, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of international law, and could weaken barriers that restrain escalation from conventional to nuclear war," the report notes.
The latter half of the 20th century witnessed numerous nuclear close calls, many caused by misinterpretations, limitations, or outright failures of technology. While technologies like artificial intelligence are often touted as immune to human fallibility, the report suggests that such hubris could have deadly and unforeseen consequences.
"The major powers are rushing ahead with the weaponization of advanced technologies before they have fully considered—let alone attempted to mitigate—the consequences of doing so."
"An increased reliance on AI could lead to new types of catastrophic mistakes," a 2018 report by the Rand Corporation warned. "There may be pressure to use it before it is technologically mature; it may be susceptible to adversarial subversion; or adversaries may believe that the AI is more capable than it is, leading them to make catastrophic mistakes."
While the Pentagon in 2020 adopted five principles for what it calls the "ethical" use of AI, many ethicists argue the only safe course of action is a total ban on lethal autonomous weapons systems.
Hypersonic missiles, which can travel at speeds of Mach 5—five times the speed of sound—or faster, are now in the arsenals of at least the United States, China, and Russia. Last year, Russian officials acknowledged deploying Kinzhal hypersonic missiles three times during the country's invasion of Ukraine, in what is believed to be the first-ever combat use of such weapons. In recent years, China has tested multiple hypersonic missile variants using specially designed high-altitude balloons. Countries including Australia, France, India, Japan, Germany, Iran, and North Korea are also developing hypersonic weapons.
"DARPA's HAWC program is a wrap…concluding with a successful @LockheedMartin #hypersonic missile flying more than 300 nautical miles and lots of data for the @usairforce. More: https://t.co/Yqq2Xl50jn" —DARPA (@DARPA), January 30, 2023
The report also warns of the escalatory potential of cyberwarfare and automated battlefield decision-making.
"As was the case during World Wars I and II, the major powers are rushing ahead with the weaponization of advanced technologies before they have fully considered—let alone attempted to mitigate—the consequences of doing so, including the risk of significant civilian casualties and the accidental or inadvertent escalation of conflict," Michael Klare, a board member at the Arms Control Association and the report's lead author, said in a statement.
"While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved," he added.
The report asserts that bilateral and multilateral agreements between countries that "appreciate the escalatory risks posed by the weaponization of emerging technologies" are critical to minimizing those dangers.
"As an example of a useful first step, the leaders of the major nuclear powers could jointly pledge to eschew cyberattacks" against each other's command, control, communications, and information (C3I) systems, the report states. A code of conduct governing the military use of artificial intelligence based on the Pentagon's AI ethics principles is also recommended.
"If the major powers are prepared to discuss binding restrictions on the military use of destabilizing technologies, certain priorities take precedence," the paper argues. "The first would be an agreement or agreements prohibiting attacks on the nuclear C3I systems of another state by cyberspace means or via missile strikes, especially hypersonic strikes."
"Another top priority would be measures aimed at preventing swarm attacks by autonomous weapons on another state's missile submarines, mobile ICBMs, and other second-strike retaliatory systems," the report continues, referring to intercontinental ballistic missiles. "Strict limitations should be imposed on the use of automated decision-support systems with the capacity to inform or initiate major battlefield decisions, including a requirement that humans exercise ultimate control over such devices."
"Without the adoption of measures such as these, cutting-edge technologies will be converted into military systems at an ever-increasing tempo, and the dangers to world security will grow apace," the publication concluded. "A more thorough understanding of the distinctive threats to strategic stability posed by these technologies and the imposition of restraints on their military use would go a long way toward reducing the risks of Armageddon."