"It's time to get serious about advanced AI systems," said one computer science professor. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."
Amid preparations for a global artificial intelligence safety summit in the United Kingdom, two dozen AI experts on Tuesday released a short paper and policy supplement urging humanity to "address ongoing harms and anticipate emerging risks" associated with the rapidly developing technology.
The experts—including Yoshua Bengio, Geoffrey Hinton, and Andrew Yao—wrote that "AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it."
Already, "high deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots," they noted, stressing how much advancement has come in just the past few years. "There is no fundamental reason why AI progress would slow or halt at the human level."
"Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."
Given that "AI systems could rapidly come to outperform humans in an increasing number of tasks," the experts warned, "if such systems are not carefully designed and deployed, they pose a range of societal-scale risks."
"They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society," the experts wrote. "They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance."
"Many of these risks could soon be amplified, and new risks created, as companies are developing autonomous AI: systems that can plan, act in the world, and pursue goals," they highlighted. "Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."
"AI assistants are already co-writing a large share of computer code worldwide; future AI systems could insert and then exploit security vulnerabilities to control the computer systems behind our communication, media, banking, supply chains, militaries, and governments," they explained. "In open conflict, AI systems could threaten with or use autonomous or biological weapons. AI having access to such technology would merely continue existing trends to automate military activity, biological research, and AI development itself. If AI systems pursued such strategies with sufficient skill, it would be difficult for humans to intervene."
The experts asserted that until sufficient regulations exist, major companies should "lay out if-then commitments: specific safety measures they will take if specific red-line capabilities are found in their AI systems." They are also calling on tech giants and public funders to put at least a third of their artificial intelligence research and development budgets toward "ensuring safety and ethical use, comparable to their funding for AI capabilities."
Meanwhile, policymakers must get to work. According to the experts:
To keep up with rapid progress and avoid inflexible laws, national institutions need strong technical expertise and the authority to act swiftly. To address international race dynamics, they need the affordance to facilitate international agreements and partnerships. To protect low-risk use and academic research, they should avoid undue bureaucratic hurdles for small and predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: a small number of most powerful AI systems—trained on billion-dollar supercomputers—which will have the most hazardous and unpredictable capabilities.
To enable effective regulation, governments urgently need comprehensive insight into AI development. Regulators should require model registration, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage. Regulators also need access to advanced AI systems before deployment to evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible.
The experts also advocated for holding frontier AI developers and owners legally accountable for harms "that can be reasonably foreseen and prevented." As for future systems that could evade human control, they wrote, "governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready."
Stuart Russell, one of the experts behind the documents and a computer science professor at the University of California, Berkeley, told The Guardian that "there are more regulations on sandwich shops than there are on AI companies."
"It's time to get serious about advanced AI systems," Russell said. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."
In the United States, President Joe Biden plans to soon unveil an AI executive order, and U.S. Sens. Brian Schatz (D-Hawaii) and John Kennedy (R-La.) on Tuesday introduced a generative artificial intelligence bill welcomed by advocates.
"Generative AI threatens to plunge us into a world of fraud, deceit, disinformation, and confusion on a never-before-seen scale," said Public Citizen's Richard Anthony. "The Schatz-Kennedy AI Labeling Act would steer us away from this dystopian future by ensuring we can distinguish between content from humans and content from machines."
It "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," says a new statement signed by dozens of artificial intelligence critics and boosters.
On Tuesday, 80 artificial intelligence scientists and more than 200 "other notable figures" signed a statement that says "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics, and other concerned individuals doesn't go into detail about the existential threats posed by AI. Instead, it seeks to "open up discussion" and "create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously," according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.
Lead signatory Geoffrey Hinton, often called "the godfather of AI," has been sounding the alarm for weeks. Earlier this month, the 75-year-old professor emeritus of computer science at the University of Toronto announced that he had resigned from his job at Google in order to speak more freely about the dangers associated with AI.
Before he quit Google, Hinton told CBS News in March that the rapidly advancing technology's potential impacts are comparable to "the Industrial Revolution, or electricity, or maybe the wheel."
Asked about the chances of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."
That frightening potential doesn't necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called "artificial general intelligence" (AGI), which would encompass computers developing and acting on their own ideas.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI," Hinton told CBS News. "Now I think it may be 20 years or less."
Pressed by the outlet if it could happen sooner, Hinton conceded that he wouldn't rule out the possibility of AGI arriving within five years, a significant change from a few years ago when he "would have said, 'No way.'"
"We have to think hard about how to control that," said Hinton. Asked if that's possible, Hinton said, "We don't know, we haven't been there yet, but we can try."
The AI pioneer is far from alone. According to the 2023 AI Index Report, an annual assessment of the fast-growing industry published last month by the Stanford Institute for Human-Centered Artificial Intelligence, 57% of computer scientists surveyed said that "recent progress is moving us toward AGI," and 58% agreed that "AGI is an important concern."
Although its findings were released in mid-April, Stanford's survey of 327 experts in natural language processing—a branch of computer science essential to the development of chatbots—was conducted last May and June, months before OpenAI's ChatGPT burst onto the scene in November.
OpenAI CEO Sam Altman, who signed the statement shared Tuesday by the Center for AI Safety, wrote in a February blog post: "The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world."
The following month, however, Altman declined to sign an open letter calling for a half-year moratorium on training AI systems beyond the level of OpenAI's latest chatbot, GPT-4.
The letter, published in March, states that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Tesla and Twitter CEO Elon Musk was among those who called for a pause two months ago, but he is "developing plans to launch a new artificial intelligence start-up to compete with" OpenAI, according to The Financial Times, raising the question of whether his stated concern about the technology's "profound risks to society and humanity" is sincere or an expression of self-interest.
That Altman and several other AI boosters signed Tuesday's statement raises the possibility that insiders with billions of dollars at stake are attempting to showcase their awareness of the risks posed by their products in a bid to persuade officials of their capacity for self-regulation.
Demands from outside the industry for robust government regulation of AI are growing. While ever-more dangerous forms of AGI may still be years away, there is already mounting evidence that existing AI tools are exacerbating the spread of disinformation, from chatbots spouting lies and face-swapping apps generating fake videos to cloned voices committing fraud. Current, untested AI is hurting people in other ways, including when automated technologies deployed by Medicare Advantage insurers unilaterally decide to end payments, resulting in the premature termination of coverage for vulnerable seniors.
Critics have warned that in the absence of swift interventions from policymakers, unregulated AI could harm additional healthcare patients, undermine fact-based journalism, hasten the destruction of democracy, and lead to an unintended nuclear war. Other common worries include widespread worker layoffs and worsening inequality as well as a massive uptick in carbon pollution.
A report published last month by Public Citizen argues that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated," the progressive advocacy group warned in a statement.
"History offers no reason to believe that corporations can self-regulate away the known risks—especially since many of these risks are as much a part of generative AI as they are of corporate greed," the watchdog continued. "Businesses rushing to introduce these new technologies are gambling with peoples' lives and livelihoods, and arguably with the very foundations of a free society and livable world."
Earlier this month, Public Citizen president Robert Weissman welcomed the Biden administration's new plan to "promote responsible American innovation in artificial intelligence and protect people's rights and safety," but he also stressed the need for "more aggressive measures" to "address the threats of runaway corporate AI."
Echoing Public Citizen, an international group of doctors warned three weeks ago in the peer-reviewed journal BMJ Global Health that AI "could pose an existential threat to humanity" and demanded a moratorium on the development of such technology pending strong government oversight.
AI "poses a number of threats to human health and well-being," the physicians and related experts wrote. "With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing."