"Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated," Public Citizen warns. "History offers no reason to believe that corporations can self-regulate away the known risks."
"Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
So says a report on the dangers of artificial intelligence (AI) published Tuesday by Public Citizen. Titled Sorry in Advance! Rapid Rush to Deploy Generative AI Risks a Wide Array of Automated Harms, the analysis by researchers Rick Claypool and Cheyenne Hunt aims to "reframe the conversation around generative AI to ensure that the public and policymakers have a say in how these new technologies might upend our lives."
Following the November release of OpenAI's ChatGPT, generative AI tools have been receiving "a huge amount of buzz—especially among the Big Tech corporations best positioned to profit from them," the report notes. "The most enthusiastic boosters say AI will change the world in ways that make everyone rich—and some detractors say it could kill us all. Separate from frightening threats that may materialize as the technology evolves are real-world harms the rush to release and monetize these tools can cause—and, in many cases, is already causing."
Claypool and Hunt categorized these harms into "five broad areas of concern":
In a statement, Public Citizen warned that "businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated."
"History offers no reason to believe that corporations can self-regulate away the known risks—especially since many of these risks are as much a part of generative AI as they are of corporate greed," the statement continues. "Businesses rushing to introduce these new technologies are gambling with people's lives and livelihoods, and arguably with the very foundations of a free society and livable world."
On Thursday, April 27, Public Citizen is hosting a hybrid in-person/Zoom conference in Washington, D.C., during which U.S. Rep. Ted Lieu (D-Calif.) and 10 other panelists will discuss the threats posed by AI and how to rein in the rapidly growing yet virtually unregulated industry. People interested in participating must register by this Friday.
Demands to regulate AI are mounting. Last month, Geoffrey Hinton, considered the "godfather of artificial intelligence," compared the quickly advancing technology's potential impacts to "the Industrial Revolution, or electricity, or maybe the wheel."
Asked by CBS News' Brook Silva-Braga about the possibility of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."
That frightening potential doesn't necessarily lie with existing AI tools such as ChatGPT, but rather with what is called "artificial general intelligence" (AGI), through which computers develop and act on their own ideas.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI," Hinton told CBS News. "Now I think it may be 20 years or less." Eventually, Hinton admitted that he wouldn't rule out the possibility of AGI arriving within five years—a major departure from a few years ago when he "would have said, 'No way.'"
"We have to think hard about how to control that," said Hinton. Asked by Silva-Braga if that's possible, Hinton said, "We don't know, we haven't been there yet, but we can try."
The AI pioneer is far from alone. In February, OpenAI CEO Sam Altman wrote in a company blog post: "The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world."
More than 26,000 people have signed a recently published open letter that calls for a six-month moratorium on training AI systems beyond the level of OpenAI's latest chatbot, GPT-4, although Altman is not among them.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," says the letter.
While AGI may still be a few years away, Public Citizen's new report makes clear that existing AI tools—including chatbots spewing lies, face-swapping apps generating fake videos, and cloned voices committing fraud—are already causing or threatening to cause serious harm, including intensifying inequality, undermining democracy, displacing workers, preying on consumers, and exacerbating the climate crisis.
These threats "are all very real and highly likely to occur if corporations are permitted to deploy generative AI without enforceable guardrails," Claypool and Hunt wrote. "But there is nothing inevitable about them."
They continued:
Government regulation can block companies from deploying the technologies too quickly (or block them altogether if they prove unsafe). It can set standards to protect people from the risks. It can impose duties on companies using generative AI to avoid identifiable harms, respect the interests of communities and creators, pretest their technologies, take responsibility, and accept liability if things go wrong. It can demand equity be built into the technologies. It can insist that if generative AI does, in fact, increase productivity and displace workers, then the economic benefits be shared with those harmed and not be concentrated among a small circle of companies, executives, and investors.
Amid "growing regulatory interest" in an AI "accountability mechanism," the Biden administration announced last week that it is seeking public input on measures that could be implemented to ensure that "AI systems are legal, effective, ethical, safe, and otherwise trustworthy."
According to Axios, Senate Majority Leader Chuck Schumer (D-N.Y.) is "taking early steps toward legislation to regulate artificial intelligence technology."
In the words of Claypool and Hunt: "We need strong safeguards and government regulation—and we need them in place before corporations disseminate AI technology widely. Until then, we need a pause."