Advocates responded with deep skepticism on Friday after 20 major tech companies—including Microsoft, Google, Meta, and OpenAI—signed an
accord pledging to combat the proliferation of AI-generated deepfakes during elections across the globe this year.
"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public
in ways that jeopardize the integrity of electoral processes," the accord reads, defining such content as "convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote."
The new pact sets out "a voluntary framework of principles and actions" aimed at advancing several goals, including detection of deceptive AI content, prevention of the spread of deepfakes, and public awareness of potentially manipulated audio, video, and images.
Lisa Gilbert, executive vice president of the consumer advocacy group Public Citizen, said Friday that "we are happy to see these companies taking steps to voluntarily rein in what are likely to be some of the worst abuses."
But a commitment to "self-police" is "not enough," Gilbert argued.
"The AI companies must commit to hold back technology—especially text-to-video—that presents major election risks until there are substantial and adequate safeguards in place to help us avert many potential problems," she said. "All of the companies should also affirmatively support pending laws, both federal and state, as well as needed regulations that will rein in political deepfakes, and not introduce dangerous new technologies until those legal protections are in place."
The digital watchdog group Free Press also argued that the tech giants must do much more than what's outlined in the new accord, which the organization characterized as "more empty promises."
"Voluntary promises like the one announced today simply aren't good enough to meet the global challenges facing democracy," said Nora Benavidez, senior counsel and director of digital justice and civil rights at Free Press. "Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises. To address the real harms that AI poses in a busy election year, these companies must do more than develop technology to combat technology."
"You can't simply tech around this problem," Benavidez continued. "We need robust content moderation that involves human review, labeling, and enforcement."
The pact, titled "A Tech Accord to Combat Deceptive Use of AI in 2024 Elections," was made public as the U.S. Federal Election Commission (FEC) continued to face backlash for failing to move swiftly on deepfake regulations ahead of the 2024 elections.
Last month, FEC Chair Sean Cooksey, a Republican, said he expects the agency to "resolve the AI rulemaking by early summer"—raising questions over whether any new regulations will be in place in time for November's contests, including the high-stakes presidential race.
Public Citizen president Robert Weissman warned Friday that the FEC "is asleep at the wheel" as other federal agencies and states across the U.S. take steps to confront the threat of deepfakes ahead of the 2024 elections.
"C'mon FEC. An electoral nightmare is racing toward us. It confers no partisan advantage," said Weissman. "The solutions are straightforward and have bipartisan support—as states across the nation are demonstrating. Wake up from your trance. Do your job and protect our elections and democracy. Ban political deepfakes without delay."