"It's time for the FCC to protect voters from deepfakes," said one advocate.
A week after the Federal Election Commission announced it would not take action to regulate artificial intelligence-generated "deepfakes" in political ads, more than 40 civil society groups on Thursday called on the Federal Communications Commission to step in and ensure U.S. voters are informed about fake content used by campaigns as they prepare to go to the polls.
The groups, including Public Citizen, the AFL-CIO, Access Now, and the Campaign Legal Center, backed a proposal by the FCC to require on-air and written disclosures when there is AI-generated content in political ads.
"It's time for the FCC to protect voters from deepfakes!" said Willmary Escoto, policy counsel for Access Now.
Unveiled in May by FCC Chair Jessica Rosenworcel, the FCC's proposal would apply the disclosure rules to ads pertaining to candidates and issues and push for a "specific definition of AI-generated content."
The civil society groups expressed their "strong support" for rules requiring "transparency in the use of AI-generated content in political advertisements on TV and radio, especially when the AI-generated content falsely depicts a candidate or persons saying or doing something that they never did with the intent to cause harm or deceive voters (known as 'deepfakes')."
"These rules are essential to safeguard the integrity of our democratic processes and ensure that voters are fully informed of the origins of political advertisements," wrote the groups.
Public Citizen condemned the Federal Election Commission last week when its Republican chair, Sean Cooksey, said the agency should "study how AI is actually used on the ground before considering any new rules."
The groups on Thursday said evidence already "abounds of the significant and deceptive impact that AI-generated content can have," pointing to a deepfake video recently posted by X owner Elon Musk that manipulated the voice and image of Democratic presidential candidate and Vice President Kamala Harris, making it appear that she called herself the "ultimate diversity hire."
"The proposed disclosure requirements are a natural and common-sense extension of the FCC's existing mandates to ensure transparency in broadcasting in general and in political advertising on radio and TV in particular," said the groups.
They also commended the FCC's leadership in addressing the "critical issue" of deepfakes.