We should all be frightened by this use of AI for death and destruction. But this is not new. Israel and the US have been testing and using AI in Palestine for years.
Earlier this month, OpenAI, the company that brings us ChatGPT, announced a partnership with the California-based weapons company Anduril to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision-making on the battlefield. This fits squarely within the US military’s and OpenAI’s plans to normalize the use of AI on the battlefield.
Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including Sentry surveillance towers currently used at US military bases worldwide, at the US-Mexico border, and on the British coastline to detect migrants in boats. On December 3, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.
In January, OpenAI deleted from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Less than a week after doing so, the company announced a cybersecurity partnership with the Pentagon.
While it might have removed the ban on weapons development, OpenAI’s lurch into the war industry is in total antithesis to its own charter. The company’s proclamation that it will build “safe and beneficial AGI [Artificial General Intelligence]” that does not “harm humanity” is laughable when its technology is being used to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist invasions and occupations.
We should all be frightened by this use of AI for death and destruction. But this is not new. Israel and the US have been testing and using AI in Palestine for years. In fact, Hebron has been dubbed a “smart city” as the occupation enforces its tyranny through a proliferation of motion and heat sensors, facial recognition technologies, and CCTV surveillance. At the center of this oppressive surveillance is the Blue Wolf System, an AI tool that scans the faces of Palestinians when they are photographed by Israeli occupation soldiers and matches them against a biometric database in which information about them is stored. Upon inputting a photo into the system, each person is assigned a color-coded rating based on their perceived ‘threat level,’ which dictates whether the soldier should allow them to pass or arrest them. IOF soldiers are rewarded with prizes for taking the most photographs with the system, which they have termed “Facebook for Palestinians,” according to 2021 revelations from The Washington Post.
OpenAI’s war technology comes as the Biden administration pushes for the US to use AI to “fulfill national security objectives.” That phrase was in fact part of the title of a White House memorandum released in October of this year calling for rapid development of artificial intelligence “especially in the context of national security systems.” While the memo does not explicitly name China, a perceived ‘AI arms race’ with China is clearly a central motivation for the administration’s call. Nor is the race solely about weapons for war; it is also about the development of technology writ large. Earlier this month, the US banned the export to China of high-bandwidth memory (HBM) chips, a critical component of AI hardware and high-end graphics processing units (GPUs). Former Google CEO Eric Schmidt warned that China is two to three years ahead of the US on AI, a major change from his statements earlier this year, in which he remarked that the US was ahead of China. When he says there is a “threat escalation matrix” as AI develops, he reveals that the US sees the technology only as a tool of war and a way to assert hegemony. AI is the latest chapter in the US’s unrelenting, and dangerous, provocation and fearmongering toward China, which it cannot bear to see advance past it.

In response to the White House memorandum, OpenAI released a statement of its own in which it reasserted many of the White House’s lines about “democratic values” and “national security.” But what is democratic about a company developing technology to better target and bomb people? Who is made secure by the collection of information to refine war technology? The statement reveals the company’s alignment with the Biden administration’s anti-China rhetoric and imperialist justifications. Coming from the company that has done more than any other to push AI systems into general society, it is deeply alarming that OpenAI has ditched all of its codes and jumped right in with the Pentagon. It is not surprising that companies like Palantir, or even Anduril itself, are using AI for war; but from a company like OpenAI, a supposedly mission-driven nonprofit, we should expect better.
AI is being used to streamline killing at the US-Mexico border, in Palestine, and in US imperial outposts across the globe. While AI systems seem innocently embedded within our daily lives, from search engines to music streaming sites, we must not forget that these same companies are using the same technology lethally. While ChatGPT might give you ten ways to protest, it is likely also being trained to kill, better and faster.
From the war machine to our planet, AI in the hands of US imperialists means only more profits for them and more devastation and destruction for us all.
Advocacy groups and experts are pressuring Congress and federal regulators to "put meaningful, enforceable guardrails in place."
Amid rising global fears about the dangers of artificial intelligence, campaigners and experts applauded U.S. President Joe Biden's administration on Friday for securing voluntary risk management commitments from seven leading AI companies while also emphasizing the need for much more from lawmakers and regulators.
"I'm very happy to see this modest, but necessary, step on the way to proper governance of AI. It is all voluntary at this stage, yet good to get these norms agreed. Hopefully it is a step on a much longer path," said Toby Ord, a senior research fellow at the U.K.'s University of Oxford and author of The Precipice: Existential Risk and the Future of Humanity.
Rob Reich, a faculty associate director at Stanford University's Institute for Human-Centered Artificial Intelligence, tweeted that "this is a big step forward for AI governance," and it is "great to see" Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI "coordinating on baseline norms of responsible AI development."
"We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination."
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology (CDT), called the announcement "a welcome step toward promoting trustworthy and secure AI systems."
"Red team testing, information sharing, and transparency around risks are all essential elements of achieving AI safety," Reeve Givens said. "The commitment to develop mechanisms to disclose to users when content is AI-generated offers the potential to reduce fraud and mis- and disinformation."
"These voluntary undertakings are only a first step. We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination," she stressed. "CDT looks forward to continuing to work with the administration and Congress in putting these safeguards in place."
Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), had a similar response.
"While EPIC appreciates the Biden administration's use of its authorities to place safeguards on the use of artificial intelligence, we both agree that voluntary commitments are not enough when it comes to Big Tech," she said. "Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent, and protects individuals' privacy and civil rights."
Biden brought together leaders from the companies to announce eight commitments that the White House said "underscore three principles that must be fundamental to the future of AI: safety, security, and trust."
As the White House outlined, the firms are pledging to:
- conduct internal and external security testing of their AI systems before release;
- share information on managing AI risks across the industry and with governments, civil society, and academia;
- invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights;
- facilitate third-party discovery and reporting of vulnerabilities in their AI systems;
- develop robust technical mechanisms, such as watermarking, to ensure that users know when content is AI-generated;
- publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use;
- prioritize research on the societal risks that AI systems can pose, including bias, discrimination, and privacy invasions; and
- develop and deploy advanced AI systems to help address society's greatest challenges.
"There is much more work underway," according to a White House fact sheet, which says the "administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation."
Brown University computer and data science professor Suresh Venkatasubramanian, a former Biden tech adviser who helped co-author the administration's Blueprint for an AI Bill of Rights, said in a series of tweets about the Friday agreement that "on process, there's good stuff here," but "on content, it's a bit of a mixed bag."
While recognizing the need for additional action, Venkatasubramanian also said that voluntary efforts help show that "adding guardrails in the development of public-facing systems isn't the end of the world or even the end of innovation."
The White House fact sheet says that "as we advance this agenda at home, the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the U.K."
Gabriela Zanfir-Fortuna of the Future of Privacy Forum pointed out that the European Union was not listed as a partner.
As Common Dreams reported last month, the European Parliament passed a draft law that would strictly regulate the use of artificial intelligence, and now, members of the legislative body are negotiating a final version with the E.U.'s executive institutions.
The fact sheet adds that "the United States seeks to ensure that these commitments support and complement Japan's leadership of the G7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as chair of the Global Partnership on AI."
Noting that portion of the document, Zanfir-Fortuna tweeted: "What is missing from the list? The Council of Europe's ongoing process to adopt an international agreement on AI."
"We cannot allow generative AI to promote a parasitic economy that diverts financial resources that should benefit the news media," said one advocate.
Warning of the ongoing expansion of artificial intelligence-generated websites that resemble legitimate news outlets and draw ad revenue away from them, Reporters Without Borders on Wednesday implored search engines and advertisers to slow the spread of automated "content farms" by denying them access to "funds that should be reserved for real journalism."
"We cannot allow generative AI to promote a parasitic economy that diverts financial resources that should benefit the news media," Vincent Berthier, head of the Tech Desk at Reporters Without Borders (RSF), said in a statement.
"As well as an overall fall in the quality of online information, there is also a real danger of a further decline in funding essential to online media," said Berthier. "We urge search engines and advertisers not to allow these AI-generated sites to become profitable."
"As well as an overall fall in the quality of online information, there is also a real danger of a further decline in funding essential to online media."
Earlier this month, NewsGuard, which evaluates the reliability of online news and information, published an analysis entitled Rise of the Newsbots: AI-Generated News Websites Proliferating Online.
The report identified at least 49 ostensible news websites "spanning seven languages—Chinese, Czech, English, French, Portuguese, Tagalog, and Thai—that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication."
These automated content farms, which reach millions of internet users, "churn out vast amounts of clickbait articles to optimize advertising revenue," NewsGuard noted, exacerbating the dangerous worldwide spread of misinformation in the process.
As RSF noted Wednesday:
Dressed up to look like media, some of these sites rewrite journalistic content plundered from real news sites. Others produce fake stories or mediocre content designed solely to attract traffic. One reported in April that Joe Biden had died. Another falsely reported that Ukraine had claimed that it killed 3,870 Russian soldiers in a single attack.
Generated by AI and usually run anonymously, some of these sites "publish hundreds of articles a day," according to NewsGuard. There is a real risk that the Internet will soon be flooded by many more of these sites pumping out garbage that will inevitably congest search engines, with the result that reliable news reporting will struggle to make itself visible.
The modus operandi of these sites is very simple—maximize clicks while minimizing effort in order to optimize profit. "Many of the sites are saturated with advertisements," says NewsGuard, "indicating that they were likely designed to generate revenue from programmatic ads—ads that are placed algorithmically across the web."
"Advertisers have a huge responsibility," RSF continued. "These content farms will inevitably proliferate if they can continue to make money from advertising. The ad industry must give a firm undertaking to ensure ads are placed above all with media that are reliable news sources."
The watchdog also urged the ad industry "to manage programmatic advertising mechanisms responsibly and to acquire the monitoring and control tools needed to ensure that these content farms do not become profitable."
RSF is pushing advertisers to curb the rapid spread of automated clickbait just weeks after warning, in its annual press freedom report, that the fast-growing, AI-powered "fake content industry" threatens to undermine fact-based journalism around the globe. That journalism is already at risk from old-fashioned violence against reporters, who are being jailed and killed at alarming rates.