While welcoming the rule, one advocate said it is "not enough to safeguard citizens and our elections."
Just over two weeks after New Hampshire voters were inundated with artificial intelligence-generated robocalls featuring U.S. President Joe Biden's fake voice telling them not to vote in their state's primary, the Federal Communications Commission on Thursday announced what one advocate called a "desperately needed" rule declaring such calls illegal under federal law.
The FCC unanimously voted to adopt the declaratory ruling, saying calls like those made in New Hampshire are "artificial" under the Telephone Consumer Protection Act (TCPA).
The new rule goes into effect immediately, prohibiting people or groups from using voice cloning technology to create robocalls and giving state attorneys general civil enforcement authority.
According to the FCC, under the TCPA, the commission can also "take steps to block calls from telephone carriers facilitating illegal robocalls" and individual consumers or groups can bring a lawsuit against robocallers in court.
On Tuesday, the New Hampshire Department of Justice announced it had traced last month's robocalls to Life Corporation, a Texas company that made up to 25,000 of the calls.
Ishan Mehta, media and democracy program director for Common Cause, said the calls in New Hampshire last month represented "only the tip of the iceberg" and warned that "it is critically important that the FCC now use this authority to fine violators and block the telephone companies that carry the calls."
FCC Chairwoman Jessica Rosenworcel said that "bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We're putting the fraudsters behind these robocalls on notice."
Robert Weissman, president of consumer advocacy group Public Citizen, said the rule will "meaningfully protect consumers from rapidly spreading AI scams and deception" and urged other federal agencies to "follow suit and apply the tools and laws at their disposal to regulate AI."
"We need Congress to prohibit bad actors from using deceptive AI to disrupt our elections. The FEC, too, must clarify regulatory language to ban the use of deliberately deceptive AI in campaign communications."
The TCPA, however, is "not enough to safeguard citizens and our elections" from the larger threat of deepfakes and AI, warned Weissman.
"The Telephone Consumer Protection Act applies only in limited measure to election-related calls," he said. "The act's prohibition on use of 'an artificial or prerecorded voice' generally does not apply to noncommercial calls and nonprofits. So the FCC's new rule will not cure the problem of AI voice-generated calls related to elections."
Public Citizen has repeatedly demanded that the Federal Election Commission (FEC) promptly regulate deepfake images and videos, which have already been used in campaign materials by former President Donald Trump, who is running for the Republican nomination.
Last month, the FEC said a decision on deepfakes is likely several months away.
On Thursday, Nick Penniman, founder and CEO of political reform group Issue One, called the FCC's decision "a positive step" that is "not enough."
"The unregulated use of AI as a means to target, manipulate, and deceive voters is an existential threat to democracy and the integrity of our elections. This is not a future possibility, but a present reality that demands decisive action," said Penniman. "We need Congress to prohibit bad actors from using deceptive AI to disrupt our elections. The FEC, too, must clarify regulatory language to ban the use of deliberately deceptive AI in campaign communications."
"These guardrails are vital to ensure we have the necessary tools to effectively counter this growing threat," he added, "and protect our elections."
Mehta called on Congress to pass the Protect Elections from Deceptive AI Act, which would prohibit the distribution of deceptive AI-generated audio, images, or video relating to federal candidates in political ads.
"We hope that both the House and the Senate will follow the example of the FCC," said Mehta, "whose Democratic and Republican commissioners recognized the threat posed by AI and came together in a unanimous vote to outlaw robocalls utilizing AI voice-cloning tools."