AI technology is seen changing the appearance of a woman. (Photo: FotografieLink via iStock/Getty Images Plus)

Civil Society Groups Back FCC Effort to Confront Deceptive Deepfakes

"It's time for the FCC to protect voters from deepfakes," said one advocate.

A week after the Federal Election Commission announced it would not take action to regulate artificial intelligence-generated "deepfakes" in political ads, more than 40 civil society groups on Thursday called on the Federal Communications Commission to step in to ensure U.S. voters will be informed about fake content used by campaigns as they prepare to go to the polls.

The groups, including Public Citizen, the AFL-CIO, Access Now, and the Campaign Legal Center, backed a proposal by the FCC to require on-air and written disclosures when there is AI-generated content in political ads.

"It's time for the FCC to protect voters from deepfakes!" said Willmary Escoto, policy counsel for Access Now.

Unveiled in May by FCC Chair Jessica Rosenworcel, the FCC's proposal would apply the disclosure rules to ads pertaining to candidates and issues and push for a "specific definition of AI-generated content."

"These rules are essential to safeguard the integrity of our democratic processes and ensure that voters are fully informed of the origins of political advertisements."

The civil society groups expressed their "strong support" for rules requiring "transparency in the use of AI-generated content in political advertisements on TV and radio, especially when the AI-generated content falsely depicts a candidate or persons saying or doing something that they never did with the intent to cause harm or deceive voters (known as 'deepfakes')."

"These rules are essential to safeguard the integrity of our democratic processes and ensure that voters are fully informed of the origins of political advertisements," wrote the groups.

Public Citizen condemned the Federal Election Commission last week after its Republican chair, Sean Cooksey, said the agency should "study how AI is actually used on the ground before considering any new rules."

The groups on Thursday said evidence already "abounds of the significant and deceptive impact that AI-generated content can have," pointing to X owner Elon Musk's recent post of a deepfake video that used a manipulated image of Democratic presidential candidate and Vice President Kamala Harris to make it seem she was calling herself the "ultimate diversity hire."

"The proposed disclosure requirements are a natural and common-sense extension of the FCC's existing mandates to ensure transparency in broadcasting in general and in political advertising on radio and TV in particular," said the groups.

They also commended the FCC's leadership in addressing the "critical issue" of deepfakes.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.