Advocacy groups and experts are pressuring Congress and federal regulators to "put meaningful, enforceable guardrails in place."
Amid rising global fears about the dangers of artificial intelligence, campaigners and experts applauded U.S. President Joe Biden's administration on Friday for securing voluntary risk management commitments from seven leading AI companies while also emphasizing the need for much more from lawmakers and regulators.
"I'm very happy to see this modest, but necessary, step on the way to proper governance of AI. It is all voluntary at this stage, yet good to get these norms agreed. Hopefully it is a step on a much longer path," said Toby Ord, a senior research fellow at the U.K.'s University of Oxford and author of The Precipice: Existential Risk and the Future of Humanity.
Rob Reich, a faculty associate director at Stanford University's Institute for Human-Centered Artificial Intelligence, tweeted that "this is a big step forward for AI governance," and it is "great to see" Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI "coordinating on baseline norms of responsible AI development."
"We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination."
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology (CDT), called the announcement "a welcome step toward promoting trustworthy and secure AI systems."
"Red team testing, information sharing, and transparency around risks are all essential elements of achieving AI safety," Reeve Givens said. "The commitment to develop mechanisms to disclose to users when content is AI-generated offers the potential to reduce fraud and mis- and disinformation."
"These voluntary undertakings are only a first step. We need enforceable accountability measures and requirements to roll out AI responsibly and mitigate the risks and potential harms to individuals, including bias and discrimination," she stressed. "CDT looks forward to continuing to work with the administration and Congress in putting these safeguards in place."
Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), had a similar response.
"While EPIC appreciates the Biden administration's use of its authorities to place safeguards on the use of artificial intelligence, we both agree that voluntary commitments are not enough when it comes to Big Tech," she said. "Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent, and protects individuals' privacy and civil rights."
Biden brought together leaders from the companies to announce eight commitments that the White House said "underscore three principles that must be fundamental to the future of AI: safety, security, and trust."
As the White House outlined, the firms are pledging to:

- Commit to internal and external security testing of their AI systems before their release;
- Share information across the industry and with governments, civil society, and academia on managing AI risks;
- Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights;
- Facilitate third-party discovery and reporting of vulnerabilities in their AI systems;
- Develop robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system;
- Publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use;
- Prioritize research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination and protecting privacy; and
- Develop and deploy advanced AI systems to help address society's greatest challenges.
"There is much more work underway," according to a White House fact sheet, which says the "administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation."
Brown University computer and data science professor Suresh Venkatasubramanian, a former Biden tech adviser who helped co-author the administration's Blueprint for an AI Bill of Rights, said in a series of tweets about the Friday agreement that "on process, there's good stuff here," but "on content, it's a bit of a mixed bag."
While recognizing the need for additional action, Venkatasubramanian also said that voluntary efforts help show that "adding guardrails in the development of public-facing systems isn't the end of the world or even the end of innovation."
The White House fact sheet says that "as we advance this agenda at home, the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the U.K."
Gabriela Zanfir-Fortuna of the Future of Privacy Forum pointed out that the European Union was not listed as a partner.
As Common Dreams reported last month, the European Parliament passed a draft law that would strictly regulate the use of artificial intelligence, and now, members of the legislative body are negotiating a final version with the E.U.'s executive institutions.
The fact sheet adds that "the United States seeks to ensure that these commitments support and complement Japan's leadership of the G7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as chair of the Global Partnership on AI."
Noting that portion of the document, Zanfir-Fortuna tweeted: "What is missing from the list? The Council of Europe's ongoing process to adopt an international agreement on AI."