"The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire," warned Public Citizen president Robert Weissman.
The head of the consumer advocacy group Public Citizen on Tuesday called on the two major U.S. political parties and their presidential candidates to pledge not to use generative artificial intelligence or deepfake technology "to mislead or defraud" voters during the 2024 electoral cycle.
Noting that "political operatives now have the means to produce ads with highly realistic computer-generated images, audio, and video of opponents that appear genuine, but are completely fabricated," Public Citizen warned of the prospect of an "October Surprise" deepfake video that could go viral "with no ability for voters to determine that it's fake, no time for a candidate to deny it, and no way to demonstrate convincingly that it's fake."
The watchdog offered recent examples of deepfake creations, including an audio clip of President Joe Biden discussing the 2011 film We Bought a Zoo.
"Generative AI now poses a significant threat to truth and democracy as we know it."
"Generative AI now poses a significant threat to truth and democracy as we know it," Public Citizen president Robert Weissman said in a statement. "The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire."
As Thor Benson recently noted in Wired:
There are plenty of ways to generate AI images from text, such as DALL-E, Midjourney, and Stable Diffusion. It's easy to generate a clone of someone's voice with an AI program like the one offered by ElevenLabs. Convincing deepfake videos are still difficult to produce, but... that might not be the case within a year or so.
"I don't think there's a website where you can say, 'Create me a video of Joe Biden saying X.' That doesn't exist, but it will," Hany Farid, a professor at the University of California, Berkeley's School of Information, told Wired. "It's just a matter of time. People are already working on text-to-video."
In a petition sent Tuesday to Federal Election Commission acting General Counsel Lisa J. Stevenson, Weissman and Public Citizen government affairs lobbyist Craig Holman asked the agency to "clarify when and how 52 USC §30124 ('Fraudulent misrepresentation of campaign authority') applies to deliberately deceptive AI campaign ads."
"Federal law proscribes candidates for federal office or their employees or agents from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to the other candidate or party," Weissman and Holman noted.
"In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the commission to specify in regulation or guidance that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads, that the restrictions and penalties of 52 USC §30124 are applicable," the pair added.