"President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies," Public Citizen's Robert Weissman argued.
As the White House on Thursday unveiled a plan meant to promote "responsible American innovation in artificial intelligence," a leading U.S. consumer advocate added his voice to the growing number of experts calling for a moratorium on the development and deployment of advanced AI technology.
"Today's announcement from the White House is a useful step forward, but much more is needed to address the threats of runaway corporate AI," Robert Weissman, president of the consumer advocacy group Public Citizen, said in a statement.
"But we also need more aggressive measures," Weissman asserted. "President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies, to remain in effect until there is a robust regulatory framework in place to address generative AI's enormous risks."
"At this point, Big Tech needs to be saved from itself. It makes no sense for We the People to just sit by and hope their competitive arms race on generative AI works out. The US govt must impose a moratorium on new generative AI technologies. https://t.co/L2TuAkDkGk" —Robert Weissman, on Twitter
The White House says its AI plan builds on steps the Biden administration has taken "to promote responsible innovation."
"These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year," the administration said.
The White House plan includes $140 million in National Science Foundation funding for seven new national AI research institutes—there are already 25 such facilities—that "catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good."
The new plan also includes "an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems."
Representatives of some of those companies, including Google, Microsoft, Anthropic, and OpenAI—creator of the popular ChatGPT chatbot—met with Vice President Kamala Harris and other administration officials at the White House on Thursday. According to The New York Times, President Joe Biden "briefly" dropped in on the meeting.
"This is a big deal: The @WhiteHouse will be issuing guidance on the use of AI systems by the government. This, along with everything else they announced today, must be centered on the #AIBillOfRights and developed through meaningful community engagement. https://t.co/CDxamaxWEm" —The Leadership Conference, on Twitter
"AI is one of today's most powerful technologies, with the potential to improve people's lives and tackle some of society's biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy," Harris said in a statement.
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products," she added.
"It strikes me that this meeting would be much more honest & productive with at least one critical #AI expert in attendance. https://t.co/JjdLB6z9wi" —Elizabeth M. Renieris, on Twitter
Thursday's White House meeting and plan come amid mounting concerns over the potential dangers posed by artificial intelligence on a range of issues, including military applications, life-and-death healthcare decisions, and impacts on the labor force.
In late March, tech leaders and researchers published an open letter, now signed by more than 27,000 experts, scholars, and others, urging "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Noting that AI developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control," the letter asks:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?
"Such decisions must not be delegated to unelected tech leaders," the signers asserted. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
"So what should we do about this? I think the first step is that we have to slow down research for now. It's near impossible to uninvent something, and we have to tread very lightly when playing with this kind of power. I'm not alone in this, by the way. https://t.co/l2eAA1FOAf" —Freya Holmér, on Twitter
Last month, Public Citizen argued that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," the group said in a report. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."
According to the annual AI Index Report published last month by the Stanford Institute for Human-Centered Artificial Intelligence, nearly three-quarters of researchers believe artificial intelligence "could soon lead to revolutionary social change," while 36% worry that AI decisions "could cause nuclear-level catastrophe."