"An unregulated and undisclosed Wild West of AI-generated campaign ads will further erode the public's confidence in the integrity of the electoral process," warned one expert at Public Citizen.
The watchdog group Public Citizen on Wednesday urged the Federal Election Commission to formally affirm that so-called "deepfakes" in U.S. political campaign communications are illegal under an existing law against fraudulent misrepresentation.
Public Citizen's new submission to the FEC comes after the agency in August announced it would advance the group's petition for rulemaking on the matter and as the related 60-day public comment period is set to end next Monday.
"Extraordinary advances in artificial intelligence (AI) now provide political operatives with the means to produce campaign ads and other communications with computer-generated fake images, audio, or video of candidates that appear real-life, fraudulently misrepresenting what candidates say or do," explains Public Citizen's comment to the FEC.
"Generative artificial intelligence and deepfake technology—a type of computerized technology used to create fake but convincing images, audio, and video hoaxes—is evolving very rapidly," the comment continues. "Every day, it seems, new and increasingly convincing deepfake audio and video clips are disseminated. And the pace is very likely to pick up as the 2024 presidential election nears."
"The FEC must use its authority to ban deepfakes or risk being complicit with an AI-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity."
Pointing to examples involving Democratic former Chicago mayoral candidate Paul Vallas and Republican Florida Gov. Ron DeSantis, who is seeking the GOP's 2024 presidential nomination, Public Citizen stressed that "campaigns are already running AI-generated ads that look and sound like actual candidates and events, but in fact are entirely fabricated. These ads look and sound so real that it is becoming exceedingly difficult to discern fact from fiction."
The FEC "is considering rulemaking to clarify whether and how deepfakes in campaign communications are covered under the law against 'fraudulent misrepresentation' (52 USC § 30124)," the group noted. "Because of the limitations and narrow reach of the law, such rulemaking should not be viewed as a panacea to the problem of deliberately deceptive AI-content in campaign messages, but it would be a very important first step."
"Deceptive deepfakes fit squarely into the parameters of 52 USC § 30124," Public Citizen argued. "Specifically, by falsely putting words into another candidate's mouth, or showing the candidate taking action they did not, the deceptive deepfaker fraudulently speaks or act 'for' that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe."
The FEC's consideration of deepfakes policy comes as the 2024 fight for the White House is already well underway, with Democratic President Joe Biden campaigning for reelection and Republican former President Donald Trump leading his party's crowded field despite four criminal cases, two of which stem from his efforts to overturn his 2020 loss and his incitement of the January 6, 2021 insurrection, conduct that some experts argue constitutionally disqualifies him from holding office again.
Over the past few years, Trump's "Big Lie," the right-wing assault on the U.S. Capitol, and the GOP's ongoing efforts to roll back voting rights and impose rigged political districts have all fueled distrust in democracy. Craig Holman, government affairs lobbyist for Public Citizen and co-author of the FEC comment, warned that deepfakes only compound the problem.
"An unregulated and undisclosed Wild West of AI-generated campaign ads will further erode the public's confidence in the integrity of the electoral process," said Holman. "If voters cannot discern fact from fiction in campaign messages, they will increasingly doubt the value of casting a ballot—or the value of ballots cast by others."
In other words, as Public Citizen president and comment co-author Robert Weissman put it, "deepfakes pose a significant threat to democracy as we know it."
"The FEC must use its authority to ban deepfakes," he asserted, "or risk being complicit with an AI-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity."
Last week, U.S. Rep. Adam Schiff (D-Calif.) and Sens. Ben Ray Luján (D-N.M.) and Amy Klobuchar (D-Minn.) also weighed in, asking the FEC to "explicitly clarify that prohibitions set forth in statute (52 U.S.C. § 30124) apply to deliberately deceptive content in campaign advertisements created by generative AI" and "require disclaimers on campaign advertisements that include content created by generative AI."
Growing calls for the FEC to crack down on such content are part of broader demands from advocacy groups, industry experts, and lawmakers for clear restrictions on artificial intelligence—though, as Roll Call reported Wednesday, "drawing the line on AI-based deepfakes proves tricky for Congress," despite a general consensus that "deceptive ads pose risks to the democratic process."
U.S. Sen. Ed Markey (D-Mass.) and Rep. Pramila Jayapal (D-Wash.) led a Wednesday letter urging Biden to "implement vital near-term safeguards on the use of AI by incorporating the AI Bill of Rights into the forthcoming AI executive order, or subsequent executive orders."
The lawmakers' message to the president echoes a September letter to Biden and Vice President Kamala Harris from over 60 civil society groups. It also follows a July deal between the White House and seven leading AI companies to impose voluntary safeguards—which experts called a "great step" but "not enough."