The Federal Trade Commission proposed a new rule on Thursday that would ban the impersonation of individuals, including with the use of artificial intelligence, or AI, technology.
The announcement came the same day that OpenAI—the company behind ChatGPT—unveiled a new tool called Sora that can generate a minute-long video from a written prompt, raising new concerns about how the technology might be abused to create deepfake videos of real people doing or saying things they did not in fact do or say.
"Sooner or later, we need to adapt to the fact that realism is no longer a marker of authenticity," Princeton University computer science professor Arvind Narayanan told The Washington Post in response to Sora's emergence.
For its part, the FTC is mostly concerned about how technology can be used to fool consumers. In its announcement, the commission said that it had introduced the new rule for public comment because it had been getting a growing number of complaints about impersonation-based fraud, which has generated a "public outcry."
"Emerging technology—including AI-generated deepfakes—threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud," the commission said.
The proposed rule builds on a regulation the FTC finalized the same day, which gives the agency the ability to seek financial compensation from scammers who impersonate companies or the government.
"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever," FTC Chair Lina Khan said in a statement. "Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC's toolkit to address AI-enabled scams impersonating individuals."
The FTC also said that it wanted public comment on whether the rule should prohibit AI or other companies from knowingly allowing their products to be used by individuals who are in turn using them to commit fraud through impersonation.
Public Citizen, which has advocated for greater regulation of AI technology, welcomed the FTC's proposal.
"The FTC under Chair Khan continues to be bold and use all the tools in their toolkit to protect consumers from emerging threats," Lisa Gilbert, executive vice president of Public Citizen, said in a statement. "Today's proposed rules to ban the use of AI tools from impersonating individuals are an important change to existing regulations and will help to protect consumers from AI-generated scams."
OpenAI's preview of Sora raises the stakes in the debate surrounding AI regulation. So far, the technology is only being made available to certain professionals in film and the visual arts for feedback, as well as to "red teamers"—domain experts in areas like misinformation, hateful content, and bias—to help assess risks, OpenAI said on social media.
"We'll be taking several important safety steps ahead of making Sora available in OpenAI's products," the company said.
One major concern surrounding deepfakes is that they could be used to manipulate voters in elections, including the upcoming 2024 presidential election in the U.S. The campaign of Florida Gov. Ron DeSantis, for example, raised alarms by using false images of former President Donald Trump embracing former White House Coronavirus Task Force chief Anthony Fauci in a video ad.
There are obvious errors in the Sora sample videos, as OpenAI acknowledged. Narayanan pointed out that a woman's right and left legs switch positions in a video of a Tokyo street, but also said that not every viewer might catch details like this and that the technology would likely be used to create harder-to-discredit deepfakes.
Another concern is the impact the technology could have on jobs and labor, especially in the arts. Director Michael Gracey, an expert on visual effects, told The Washington Post that the technology would likely enable a director to make an animated film on their own, instead of with a team of 100 to 200 people. The use of AI was a major sticking point in strikes by the Screen Actors Guild-American Federation of Television and Radio Artists and Writers Guild of America last year, as Oxford Internet Institute visiting policy fellow Mutale Nkonde pointed out. Nkonde told the Post she also worried about the technology being used to dramatize hateful or violent prompts.
"From a policy perspective, do we need to start thinking about ways we can protect humans that should be in the loop when it comes to these tools?" Nkonde asked.