U.S. President Joe Biden leaves the Roosevelt Room of the White House in Washington, D.C., on July 21, 2023, after speaking about artificial intelligence alongside leaders from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

(Photo: Andrew Caballero-Reynolds/AFP via Getty Images)

White House Deal for Voluntary AI Safeguards Called 'Great Step' But 'Not Enough'

Advocacy groups and experts are pressuring Congress and federal regulators to "put meaningful, enforceable guardrails in place."

Amid rising global fears about the dangers of artificial intelligence, campaigners and experts applauded U.S. President Joe Biden's administration on Friday for securing voluntary risk management commitments from seven leading AI companies while also emphasizing the need for much more from lawmakers and regulators.

"I'm very happy to see this modest, but necessary, step on the way to proper governance of AI. It is all voluntary at this stage, yet good to get these norms agreed. Hopefully it is a step on a much longer path," said Toby Ord, a senior research fellow at the U.K.'s University of Oxford and author of The Precipice: Existential Risk and the Future of Humanity.

Rob Reich, a faculty associate director at Stanford University's Institute for Human-Centered Artificial Intelligence, tweeted that "this is a big step forward for AI governance," and it is "great to see" Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI "coordinating on baseline norms of responsible AI development."

"We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination."

Alexandra Reeve Givens, CEO of the Center for Democracy & Technology (CDT), called the announcement "a welcome step toward promoting trustworthy and secure AI systems."

"Red team testing, information sharing, and transparency around risks are all essential elements of achieving AI safety," Reeve Givens said. "The commitment to develop mechanisms to disclose to users when content is AI-generated offers the potential to reduce fraud and mis- and disinformation."

"These voluntary undertakings are only a first step. We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination," she stressed. "CDT looks forward to continuing to work with the administration and Congress in putting these safeguards in place."

Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), had a similar response.

"While EPIC appreciates the Biden administration's use of its authorities to place safeguards on the use of artificial intelligence, we both agree that voluntary commitments are not enough when it comes to Big Tech," she said. "Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent, and protects individuals' privacy and civil rights."

Biden brought together leaders from the companies to announce eight commitments that the White House said "underscore three principles that must be fundamental to the future of AI: safety, security, and trust."

As the White House outlined, the firms are pledging to:

  • Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas;
  • Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards;
  • Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights;
  • Incent third-party discovery and reporting of issues and vulnerabilities;
  • Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content;
  • Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias;
  • Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy; and
  • Develop and deploy frontier AI systems to help address society’s greatest challenges.

"There is much more work underway," according to a White House fact sheet, which says the "administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation."

Brown University computer and data science professor Suresh Venkatasubramanian, a former Biden tech adviser who helped co-author the administration's Blueprint for an AI Bill of Rights, said in a series of tweets about the Friday agreement that "on process, there's good stuff here," but "on content, it's a bit of a mixed bag."

While recognizing the need for additional action, Venkatasubramanian also said that voluntary efforts help show that "adding guardrails in the development of public-facing systems isn't the end of the world or even the end of innovation."

The White House fact sheet says that "as we advance this agenda at home, the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the U.K."

Gabriela Zanfir-Fortuna of the Future of Privacy Forum pointed out that the European Union was not listed as a partner.

As Common Dreams reported last month, the European Parliament passed a draft law that would strictly regulate the use of artificial intelligence, and now, members of the legislative body are negotiating a final version with the E.U.'s executive institutions.

The fact sheet adds that "the United States seeks to ensure that these commitments support and complement Japan's leadership of the G7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as chair of the Global Partnership on AI."

Noting that portion of the document, Zanfir-Fortuna tweeted: "What is missing from the list? The Council of Europe's ongoing process to adopt an international agreement on AI."
