Advocacy groups and experts are pressuring Congress and federal regulators to "put meaningful, enforceable guardrails in place."
Amid rising global fears about the dangers of artificial intelligence, campaigners and experts applauded U.S. President Joe Biden's administration on Friday for securing voluntary risk management commitments from seven leading AI companies while also emphasizing the need for much more from lawmakers and regulators.
"I'm very happy to see this modest, but necessary, step on the way to proper governance of AI. It is all voluntary at this stage, yet good to get these norms agreed. Hopefully it is a step on a much longer path," said Toby Ord, a senior research fellow at the U.K.'s University of Oxford and author of The Precipice: Existential Risk and the Future of Humanity.
Rob Reich, a faculty associate director at Stanford University's Institute for Human-Centered Artificial Intelligence, tweeted that "this is a big step forward for AI governance," and it is "great to see" Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI "coordinating on baseline norms of responsible AI development."
"We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination."
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology (CDT), called the announcement "a welcome step toward promoting trustworthy and secure AI systems."
"Red team testing, information sharing, and transparency around risks are all essential elements of achieving AI safety," Reeve Givens said. "The commitment to develop mechanisms to disclose to users when content is AI-generated offers the potential to reduce fraud and mis- and disinformation."
"These voluntary undertakings are only a first step. We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination," she stressed. "CDT looks forward to continuing to work with the administration and Congress in putting these safeguards in place."
Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), had a similar response.
"While EPIC appreciates the Biden administration's use of its authorities to place safeguards on the use of artificial intelligence, we both agree that voluntary commitments are not enough when it comes to Big Tech," she said. "Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent, and protects individuals' privacy and civil rights."
Biden brought together leaders from the companies to announce eight commitments that the White House said "underscore three principles that must be fundamental to the future of AI: safety, security, and trust."
As the White House outlined, the firms are pledging to:
- conduct internal and external security testing of their AI systems before release;
- share information on managing AI risks across the industry and with governments, civil society, and academia;
- invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights;
- facilitate third-party discovery and reporting of vulnerabilities in their AI systems;
- develop robust technical mechanisms, such as watermarking, to ensure that users know when content is AI-generated;
- publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use;
- prioritize research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy; and
- develop and deploy advanced AI systems to help address society's greatest challenges.
"There is much more work underway," according to a White House fact sheet, which says the "administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation."
Brown University computer and data science professor Suresh Venkatasubramanian, a former Biden tech adviser who co-authored the administration's Blueprint for an AI Bill of Rights, said in a series of tweets about the Friday agreement that "on process, there's good stuff here," but "on content, it's a bit of a mixed bag."
While recognizing the need for additional action, Venkatasubramanian also said that voluntary efforts help show that "adding guardrails in the development of public-facing systems isn't the end of the world or even the end of innovation."
The White House fact sheet says that "as we advance this agenda at home, the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the U.K."
Gabriela Zanfir-Fortuna of the Future of Privacy Forum pointed out that the European Union was not listed as a partner.
As Common Dreams reported last month, the European Parliament passed a draft law that would strictly regulate the use of artificial intelligence, and now, members of the legislative body are negotiating a final version with the E.U.'s executive institutions.
The fact sheet adds that "the United States seeks to ensure that these commitments support and complement Japan's leadership of the G7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as chair of the Global Partnership on AI."
Noting that portion of the document, Zanfir-Fortuna tweeted: "What is missing from the list? The Council of Europe's ongoing process to adopt an international agreement on AI."
"We cannot allow generative AI to promote a parasitic economy that diverts financial resources that should benefit the news media," said one advocate.
Warning of the ongoing expansion of artificial intelligence-generated websites that resemble legitimate news outlets and draw ad revenue away from them, Reporters Without Borders on Wednesday implored search engines and advertisers to slow the spread of automated "content farms" by denying them access to "funds that should be reserved for real journalism."
"We cannot allow generative AI to promote a parasitic economy that diverts financial resources that should benefit the news media," Vincent Berthier, head of the Tech Desk at Reporters Without Borders (RSF), said in a statement.
"As well as an overall fall in the quality of online information, there is also a real danger of a further decline in funding essential to online media," said Berthier. "We urge search engines and advertisers not to allow these AI-generated sites to become profitable."
"As well as an overall fall in the quality of online information, there is also a real danger of a further decline in funding essential to online media."
Earlier this month, NewsGuard, which evaluates the reliability of online news and information, published an analysis titled "Rise of the Newsbots: AI-Generated News Websites Proliferating Online."
The report identified at least 49 ostensible news websites "spanning seven languages—Chinese, Czech, English, French, Portuguese, Tagalog, and Thai—that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication."
These automated content farms, which reach millions of internet users, "churn out vast amounts of clickbait articles to optimize advertising revenue," NewsGuard noted, exacerbating the dangerous worldwide spread of misinformation in the process.
As RSF noted Wednesday:
Dressed up to look like media, some of these sites rewrite journalistic content plundered from real news sites. Others produce fake stories or mediocre content designed solely to attract traffic. One reported in April that Joe Biden had died. Another falsely reported that Ukraine had claimed that it killed 3,870 Russian soldiers in a single attack.
Generated by AI and usually run anonymously, some of these sites "publish hundreds of articles a day," according to NewsGuard. There is a real risk that the Internet will soon be flooded by many more of these sites pumping out garbage that will inevitably congest search engines, with the result that reliable news reporting will struggle to make itself visible.
The modus operandi of these sites is very simple—maximize clicks while minimizing effort in order to optimize profit. "Many of the sites are saturated with advertisements," says NewsGuard, "indicating that they were likely designed to generate revenue from programmatic ads—ads that are placed algorithmically across the web."
"Advertisers have a huge responsibility," RSF continued. "These content farms will inevitably proliferate if they can continue to make money from advertising. The ad industry must give a firm undertaking to ensure ads are placed above all with media that are reliable news sources."
The watchdog also urged the ad industry "to manage programmatic advertising mechanisms responsibly and to acquire the monitoring and control tools needed to ensure that these content farms do not become profitable."
RSF's call on advertisers to curb the rapid spread of automated clickbait comes just weeks after the group warned in its annual press freedom report that the fast-growing, AI-powered "fake content industry" threatens to undermine fact-based journalism around the globe. That journalism is already at risk from old-fashioned violence against reporters, who are being jailed and killed at alarming rates.
It "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," says a new statement signed by dozens of artificial intelligence critics and boosters.
On Tuesday, 80 artificial intelligence scientists and more than 200 "other notable figures" signed a statement that says "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics, and other concerned individuals doesn't go into detail about the existential threats posed by AI. Instead, it seeks to "open up discussion" and "create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously," according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.
Lead signatory Geoffrey Hinton, often called "the godfather of AI," has been sounding the alarm for weeks. Earlier this month, the 75-year-old professor emeritus of computer science at the University of Toronto announced that he had resigned from his job at Google in order to speak more freely about the dangers associated with AI.
Before he quit Google, Hinton told CBS News in March that the rapidly advancing technology's potential impacts are comparable to "the Industrial Revolution, or electricity, or maybe the wheel."
Asked about the chances of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."
That frightening potential doesn't necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called "artificial general intelligence" (AGI), in which computers would develop and act on their own ideas.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI," Hinton told CBS News. "Now I think it may be 20 years or less."
Pressed by the outlet if it could happen sooner, Hinton conceded that he wouldn't rule out the possibility of AGI arriving within five years, a significant change from a few years ago when he "would have said, 'No way.'"
"We have to think hard about how to control that," said Hinton. Asked if that's possible, Hinton said, "We don't know, we haven't been there yet, but we can try."
The AI pioneer is far from alone. According to the 2023 AI Index Report, an annual assessment of the fast-growing industry published last month by the Stanford Institute for Human-Centered Artificial Intelligence, 57% of computer scientists surveyed said that "recent progress is moving us toward AGI," and 58% agreed that "AGI is an important concern."
Although its findings were released in mid-April, Stanford's survey of 327 experts in natural language processing—a branch of computer science essential to the development of chatbots—was conducted last May and June, months before OpenAI's ChatGPT burst onto the scene in November.
OpenAI CEO Sam Altman, who signed the statement shared Tuesday by the Center for AI Safety, wrote in a February blog post: "The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world."
The following month, however, Altman declined to sign an open letter calling for a half-year moratorium on training AI systems beyond the level of OpenAI's latest chatbot, GPT-4.
The letter, published in March, states that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Tesla and Twitter CEO Elon Musk was among those who called for a pause two months ago, but he is "developing plans to launch a new artificial intelligence start-up to compete with" OpenAI, according to The Financial Times, raising the question of whether his stated concern about the technology's "profound risks to society and humanity" is sincere or an expression of self-interest.
That Altman and several other AI boosters signed Tuesday's statement raises the possibility that insiders with billions of dollars at stake are attempting to showcase their awareness of the risks posed by their products in a bid to persuade officials of their capacity for self-regulation.
Demands from outside the industry for robust government regulation of AI are growing. While ever-more dangerous forms of AGI may still be years away, there is already mounting evidence that existing AI tools are exacerbating the spread of disinformation, from chatbots spouting lies and face-swapping apps generating fake videos to cloned voices committing fraud. Current, untested AI is hurting people in other ways, including when automated technologies deployed by Medicare Advantage insurers unilaterally decide to end payments, resulting in the premature termination of coverage for vulnerable seniors.
Critics have warned that in the absence of swift interventions from policymakers, unregulated AI could harm additional healthcare patients, undermine fact-based journalism, hasten the destruction of democracy, and lead to an unintended nuclear war. Other common worries include widespread worker layoffs and worsening inequality as well as a massive uptick in carbon pollution.
A report published last month by Public Citizen argues that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated," the progressive advocacy group warned in a statement.
"History offers no reason to believe that corporations can self-regulate away the known risks—especially since many of these risks are as much a part of generative AI as they are of corporate greed," the watchdog continued. "Businesses rushing to introduce these new technologies are gambling with peoples' lives and livelihoods, and arguably with the very foundations of a free society and livable world."
Earlier this month, Public Citizen president Robert Weissman welcomed the Biden administration's new plan to "promote responsible American innovation in artificial intelligence and protect people's rights and safety," but he also stressed the need for "more aggressive measures" to "address the threats of runaway corporate AI."
Echoing Public Citizen, an international group of doctors warned three weeks ago in the peer-reviewed journal BMJ Global Health that AI "could pose an existential threat to humanity" and demanded a moratorium on the development of such technology pending strong government oversight.
AI "poses a number of threats to human health and well-being," the physicians and related experts wrote. "With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing."