Tech Experts Warn Humanity Must Act Now to Avoid 'Societal-Scale' Damage by AI

Two dozen experts have released documents urging humanity to "address ongoing harms and anticipate emerging risks" associated with artificial intelligence.

(Photo: Monsitj/iStock via Getty Images)

"It's time to get serious about advanced AI systems," said one computer science professor. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."

Amid preparations for a global artificial intelligence safety summit in the United Kingdom, two dozen AI experts on Tuesday released a short paper and policy supplement urging humanity to "address ongoing harms and anticipate emerging risks" associated with the rapidly developing technology.

The experts—including Yoshua Bengio, Geoffrey Hinton, and Andrew Yao—wrote that "AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it."

Already, "high deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots," they noted, stressing how much advancement has come in just the past few years. "There is no fundamental reason why AI progress would slow or halt at the human level."

"Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."

Given that "AI systems could rapidly come to outperform humans in an increasing number of tasks," the experts warned, "if such systems are not carefully designed and deployed, they pose a range of societal-scale risks."

"They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society," the experts wrote. "They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance."

"Many of these risks could soon be amplified, and new risks created, as companies are developing autonomous AI: systems that can plan, act in the world, and pursue goals," they highlighted. "Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."

"AI assistants are already co-writing a large share of computer code worldwide; future AI systems could insert and then exploit security vulnerabilities to control the computer systems behind our communication, media, banking, supply chains, militaries, and governments," they explained. "In open conflict, AI systems could threaten with or use autonomous or biological weapons. AI having access to such technology would merely continue existing trends to automate military activity, biological research, and AI development itself. If AI systems pursued such strategies with sufficient skill, it would be difficult for humans to intervene."

The experts asserted that until sufficient regulations exist, major companies should "lay out if-then commitments: specific safety measures they will take if specific red-line capabilities are found in their AI systems." They also called on tech giants and public funders to put at least a third of their artificial intelligence research and development budgets toward "ensuring safety and ethical use, comparable to their funding for AI capabilities."

Meanwhile, policymakers must get to work. According to the experts:

To keep up with rapid progress and avoid inflexible laws, national institutions need strong technical expertise and the authority to act swiftly. To address international race dynamics, they need the affordance to facilitate international agreements and partnerships. To protect low-risk use and academic research, they should avoid undue bureaucratic hurdles for small and predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: a small number of most powerful AI systems—trained on billion-dollar supercomputers—which will have the most hazardous and unpredictable capabilities.

To enable effective regulation, governments urgently need comprehensive insight into AI development. Regulators should require model registration, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage. Regulators also need access to advanced AI systems before deployment to evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible.

The experts also advocated for holding frontier AI developers and owners legally accountable for harms "that can be reasonably foreseen and prevented." As for future systems that could evade human control, they wrote, "governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready."

Stuart Russell, one of the experts behind the documents and a computer science professor at the University of California, Berkeley, told The Guardian that "there are more regulations on sandwich shops than there are on AI companies."

"It's time to get serious about advanced AI systems," Russell said. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."

In the United States, President Joe Biden plans to unveil an AI executive order soon, and U.S. Sens. Brian Schatz (D-Hawaii) and John Kennedy (R-La.) on Tuesday introduced a generative artificial intelligence bill welcomed by advocates.

"Generative AI threatens to plunge us into a world of fraud, deceit, disinformation, and confusion on a never-before-seen scale," said Public Citizen's Richard Anthony. "The Schatz-Kennedy AI Labeling Act would steer us away from this dystopian future by ensuring we can distinguish between content from humans and content from machines."

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.