

"They've built a billion-dollar industry on stolen voices because they thought no one would make them pay for it," said a lawyer for the plaintiffs.
In yet another display of how Illinois' pioneering biometric privacy law can be used to protect Americans, state residents who work as audio storytellers, broadcast journalists, podcasters, voice actors, and more filed class-action lawsuits against Big Tech this week for "stealing their voices" to develop artificial intelligence products.
Since Illinois legislators passed the groundbreaking Biometric Information Privacy Act (BIPA) in 2008—regulating the collection, use, safeguarding, handling, storage, retention, and destruction of biometric identifiers, including fingerprints, voiceprints, and scans of a retina, iris, hand, or face geometry—there have been thousands of lawsuits filed and major settlements with Clearview AI, Facebook, and Six Flags.
Represented by the award-winning civil rights firm Loevy + Loevy, the Illinoisans are suing Adobe, Alphabet and its subsidiary Google, Apple, Amazon, ElevenLabs, Facebook parent company Meta, Microsoft, NVIDIA, and Samsung under BIPA.
The plaintiffs are audiobook narrators Lindsay Dorcus and Victoria Nassif as well as journalists Robin Amer, Yohance Lacour, Carol Marin, and Phil Rogers. Journalist Alison Flowers is part of all lawsuits except those against Amazon and Apple. Their lawyers noted that "between them, they have multiple Emmy and Peabody awards, several Pulitzer Prizes, several Alfred I. duPont-Columbia University awards, an Edward R. Murrow award, a James Beard award, a SOVAS award, and many, many other honors."
Their cases focus on the voiceprint of each plaintiff, which is "a digital fingerprint of the human voice," as the complaints explain. "It is a mathematical capture of the acoustic features—pitch, timbre, resonance—that emerge from a person's distinctive physiology, combined with the speech patterns that person develops over a lifetime: accent, cadence, articulation. Like a fingerprint, a voiceprint identifies the individual. Like a fingerprint, it cannot be changed."
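The complaints' description of a voiceprint as a "mathematical capture" of acoustic features can be made concrete with a toy sketch. The Python below estimates just one such feature, pitch, by counting zero crossings in a waveform; real voiceprint systems combine far richer measurements (everything here, including the function name and the synthetic tone, is illustrative and comes from no court filing).

```python
import math

def estimate_pitch(samples, sample_rate):
    """Crudely estimate fundamental frequency (Hz) via zero crossings.

    A periodic waveform crosses zero twice per cycle, so the crossing
    count over a known duration yields an estimate of the pitch.
    """
    crossings = sum(
        1 for prev, cur in zip(samples, samples[1:])
        if (prev < 0 <= cur) or (prev >= 0 > cur)
    )
    duration_s = len(samples) / sample_rate
    return crossings / (2 * duration_s)

# One second of a synthetic 220 Hz tone stands in for recorded speech.
sample_rate = 8000
tone = [math.sin(2 * math.pi * 220 * n / sample_rate)
        for n in range(sample_rate)]
print(f"estimated pitch: {estimate_pitch(tone, sample_rate):.0f} Hz")
```

The point of the sketch is that a voiceprint is a set of derived numerical measurements, not the recording itself, which is why, as the complaints argue, it identifies a speaker the way a fingerprint does.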
The Adobe case targets Firefly, the company's family of generative AI models. The complaint states that the company "treated the human voices that built Firefly as ownerless—ignoring the speakers' rights, taking their voiceprints without asking, paying them nothing, and giving them no notice that their voices were being used at all," and "built a mirage of commercial safety around products whose construction violated the one thing Illinois law requires before collecting a voiceprint: consent from the person."
The Google filing points out that the company "has been a repeat defendant in BIPA cases" and even "paid approximately $100 million to settle BIPA claims arising from Google Photos' face grouping feature," among other high-profile settlements.
The Meta suit highlights that "no defendant in any biometric-privacy matter pending in the United States has had more direct, more sustained, or more financially consequential notice of BIPA than Meta," given that the company "has paid the three largest biometric-privacy settlements in American history," including $650 million to resolve claims under the Illinois law regarding Facebook's photo tag suggestions.
"By the time Meta released Voicebox in June 2023, MMS in May 2023, and SeamlessM4T in August 2023, Meta had been a BIPA defendant for nearly a decade and had paid more than $2 billion in biometric-privacy settlements," the complaint continues. "The technology Meta built using plaintiffs' voices now competes with plaintiffs in the markets where they earn their living."
The Amazon filing details similar harm to plaintiffs:
Amazon extracted plaintiffs' voiceprints without notice or consent, depriving them of the right BIPA guarantees to make an informed decision about the collection and use of their biometric data. Amazon retains those voiceprints in its commercial models and continues to profit from them. Amazon has further disseminated those voiceprints, encoded in model parameters, through its cross-affiliate, subprocessor, and integration-partner networks. The technology built on those voiceprints now displaces plaintiffs in the markets where they earn their living—the broadcast journalism, investigative podcast, audiobook narration, voiceover, and voice performance markets that the voice products are designed and sold to serve.
"What we are seeing is an illegal and unethical exploitation of talent on a massive scale, and one of the largest violations of biometric privacy ever committed," said Loevy + Loevy attorney Ross Kimbarovsky in a Thursday statement.
"The legislators who wrote and passed BIPA had the foresight to realize that biometric privacy was going to be a major civil rights issue in the 21st century," the attorney continued. "Social security numbers can be changed, passwords can be reset, and credit cards can be canceled, but once your biometric data is compromised, there's nothing you can do about it."
"These companies know the law, know their liability, and know exactly how to build consent systems that comply with BIPA," Kimbarovsky added. "They've built a billion-dollar industry on stolen voices because they thought no one would make them pay for it."
In addition to Illinois, Texas and Washington state have enacted biometric privacy laws, while California, Colorado, Connecticut, Utah, and Virginia have comprehensive consumer protection policies that apply to such information, according to Bloomberg Law. However, efforts in Congress to enact federal legislation—such as the National Biometric Information Privacy Act and the Facial Recognition and Biometric Technology Moratorium Act—have been unsuccessful.
"It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward," said one expert.
Watchdog group Public Citizen is raising alarms after tech giant Google on Monday revealed that a group of criminal hackers used artificial intelligence to detect a previously unidentified software vulnerability.
As reported by The New York Times, Google said that it had "high confidence" that the hackers used AI to discover and exploit the vulnerability.
While Google said that the attack had been thwarted, the Times noted that the company "did not say precisely when the thwarted attack happened, whom it was targeting, or which AI platform the hackers used."
While the discovery of so-called "zero-day vulnerabilities" was once a rare occurrence, the proliferation of AI models has made such flaws much easier for hackers to detect. In fact, AI company Anthropic said earlier this year that it had developed a model so good at exploiting these vulnerabilities that it would not be releasing it publicly.
John Hultquist, chief analyst at Google Threat Intelligence Group, said in an interview with Cyberscoop that this kind of AI-assisted attack "is probably the tip of the iceberg and it’s certainly not going to be the last" to occur.
“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist explained. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”
JB Branch, AI governance and technology policy counsel at Public Citizen, said the attempted AI exploit once again showed how reckless Big Tech has been in aggressively pushing this technology out the door.
"Cybersecurity experts are sounding the alarm, yet AI companies continue racing to release increasingly powerful models with little regard for the societal consequences," Branch said. "It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward."
Branch also said it was well past time for Congress to step in and slap strict guardrails on the development of AI.
"We need enforceable AI regulations that require rigorous safety testing, independent review, and meaningful oversight before these systems ever reach the public," he said. "Regulators cannot remain in a perpetual game of catch-up while Big Tech gambles with the safety and stability of modern society."
While calls for more AI regulation have grown in recent months, Silicon Valley elites are planning to spend massive sums of money in this year's midterm elections to prevent candidates who support AI regulation from winning public office.
Leading the Future—a super political action committee (PAC) backed by venture capital firm Andreessen Horowitz, Palantir co-founder Joe Lonsdale, and other AI heavyweights—is spending at least $100 million to elect lawmakers who aim to pass legislation that would set a single set of AI regulations across the US, overriding any restrictions placed on the technology by state governments.
"AI is the most far-reaching and pivotal technological revolution in the history of humanity," notes the Sanders Institute. "The choices we make now will determine whether those changes make the world better or worse."
“You know you're in trouble when you can't describe reality without sounding crazy.”
That's how renowned author and activist Naomi Klein described society's relationship with rapidly—some say dangerously—evolving artificial intelligence technology during a Tuesday livestreamed panel discussion with Sen. Bernie Sanders (I-Vt.) and Rep. Ro Khanna (D-Calif.) hosted by the Sanders Institute.
Khanna and Klein are both fellows at the institute, cofounded by Sanders' wife and son, Jane O'Meara Sanders and David Driscoll. In recent years, the Sanders Institute has convened an array of conferences and events bringing together leading experts and policy advocates on a host of issues.
“This AI and robotics revolution is the most sweeping technological change that the world has ever seen,” said Sanders. “People talk about the changes that the Industrial Revolution brought, which were profound. This is going to move a lot faster, with a lot more impact.”
“This revolution is being pushed by the wealthiest people in the world,” Sanders continued. “We’re talking about Elon Musk, Mark Zuckerberg, Jeff Bezos, Peter Thiel, and other multi-multi-billionaires who are spending hundreds and hundreds [of billions] if not trillions of dollars combined trying to do the research and the implementation for these technologies.”
Turning to Khanna and Klein, the senator asked: “What are the motives of these guys? Do the American people think that Jeff Bezos and Elon Musk are sitting up nights saying, ‘Wow, we got this technology, we're going to improve life for working people?’”
Klein contended that “their motives are exactly the opposite, and they're very blunt about this, that they are in a race to reach something that they call AGI—artificial general intelligence—or even something beyond that, superintelligence.”
While agreeing with Sanders that AI will prove as transformative as the Industrial Revolution, Klein underscored one big difference between the two.
“Unlike the Industrial Revolution, which created huge numbers of jobs, the goal of this revolution is to eliminate jobs,” the Shock Doctrine author explained. “They've been absolutely transparent about what they want to achieve, which is a jobs apocalypse. They want to be free from their workers."
"They really don't like it when their workers organize and push back, whether in unions or outside of unions," Klein added. "And I think that's part of the appeal of AI for these guys, is the idea that they could become trillionaires with virtually no employees.”
Khanna, a potential 2028 presidential candidate who authored the book Progressive Capitalism: How to Make Tech Work for All of Us, has been a leading voice in the US House of Representatives on the issue of AI. The congressman pointed out that tech titans are “using technology to eliminate workers and maximize their profits, and if you look at the Industrial Revolution, for 60 years, worker wages fell… even as Britain became wealthy."
"And so the question, in my view, for AI is, are we going to let a few billionaires, trillionaires, call the shots, or are we going to make sure that the technology is actually used in any way to enhance workers, to enhance total productivity?” he asked.
Sanders noted that Bezos, Amazon's founder, "wants to raise $100 billion to do what? To automate factories in America and around the world."
"You know what that means? It means there will no longer be manufacturing jobs in the United States or in warehouses," the senator added. "He wants to get rid of the 600,000 Amazon workers and replace them with robots. Elon Musk is converting Tesla partially to a robotics company. He wants to produce a million robots a year… What do you think a robot is there for? It's to replace a union worker.”
Klein said that “if we lived in a world that took care of people… [where] if a job was eliminated, people had a guaranteed income, they knew that they had healthcare, they knew that they weren't going to get evicted, we'd be having a different conversation.”
It may be more than just jobs that are eliminated if humanity does not proceed with utmost caution.
Sanders cited AI pioneers like Geoffrey Hinton who have warned that superintelligent artificial intelligence could wipe out humanity. According to Hinton and others, the senator explained, “it’s not a question of if, it’s a question of when [AI] will become smarter than human beings, and the fear of these guys, which used to be science fiction, is that AI will essentially establish its independence from human control in order to protect itself... raising the possibility of horrific things happening.”
Khanna agreed that such an outcome is “a real risk" as countries remove guardrails on breakneck AI development with the excuse that if they don't do it, their rivals will—the same dangerous thinking that fueled the Cold War nuclear arms race between the US and Soviet Union.
“I don't know whether it will happen or not, but why would we not take every precaution to make sure it doesn’t?” the congressman asked. “And this is what I don't understand, when people say, ‘Well, we want to compete with other nations and have a race to the bottom.’”
While the specter of an AI apocalypse is growing, it remains much more a reflection of human anxieties than any sort of impending threat. The same cannot be said for lethal autonomous weapon systems—better known as killer robots, which are defined as arms that can operate without any meaningful human control.
Activists like those at the Campaign to Stop Killer Robots have long sounded the alarm on the development of weapons that can operate without human control. However, Khanna said that human decision-making alone “is not enough.”
“If AI is doing all the data analysis and saying, OK, here's the target, and you just have a human being saying, OK, I'm the one who's going to give the order [to attack]… well, there's a human last-minute judgment,” he said. "What's happened is just a dependence on these machines."
As an example, Khanna pointed to what he said was the US military's use of AI that “gave the target of the school” in southern Iran where 168 children and staff were massacred in a February 28 cruise missile strike.
Sanders raised the possibility that a future in which robots largely replace humans on the battlefield “makes it easier” for countries with such technology to wage war.
However, Khanna countered that such conflicts are “deeply asymmetrical," meaning that they're only "easier" for the more technologically advanced side.
“The United States can have drones and technology, and Israel can do that,” the congressman said. “But the people who were killed in what I call the genocide in Gaza, 70,000 people, they don't have that technology. The starving people in Cuba, because of our fuel blockade, don't have that technology. The people in Iran who were killed don't have that technology."
"So you have one side of political leadership in our country that doesn't have to worry as much about deaths for our people," he contended. "But then there’s no… moral deliberation about the dignity and worth of people who were killed.”
While such life-and-death matters are far removed from the reality of most Americans’ lives, the panelists gave examples of how AI is impacting everyday citizens and their privacy.
“We heard reports from a lot of people on the ground who were standing up to ICE,” Klein said, referring to the nationwide protests and individual acts of resistance against Immigration and Customs Enforcement and the Trump administration’s overall anti-immigrant blitz.
“They were having these very creepy experiences where ICE knew their names before they had said anything. They knew where they lived before they said anything," she added. "Scanning a face, scanning a license plate.”
Not everyone attends protests. But nearly everyone uses the internet and its accoutrements; most notably, social media. To that end, Khanna said that Big Tech isn’t just “taking our data, they’re trying to figure out what we think.”
“We've had no pushback to these companies,” he continued. “They have a profit motive to do this. They have a profit motive to get us as [addicted] to screen time as possible."
"They’re targeting young people… especially young girls that have had eating disorders... and suicidal thoughts because of the junk they've been fed," Khanna noted, calling the situation “a dereliction of Congress.”
“We have not passed any privacy legislation or restrictions really on social media companies as they've had total carte blanche to do what they want,” he said.
Sanders said that “to my mind, it is very clear why Congress is not dealing with this issue, and that is the power and the wealth of people who do not want us to deal with it.”
“To the best of my understanding, as of now, just for the 2026 elections, AI has already put $400 million into elections, and we've got… five to six more months to go,” he explained. “So let's assume that any candidate who gets up there and says, ‘You know, I have some real concerns about AI, let's slow it down, let's make it work for people rather than Elon Musk,’ that candidate will have billions of dollars thrown at him or her, which speaks to a corrupt campaign finance [system].”
Klein has similarly sounded the alarm about far-right tech oligarchs, including in a "must-read" essay with Astra Taylor about the fight against "end times fascism" published by The Guardian last year. The pair plans to release a related book in September.
“If we look at these Silicon Valley billionaires who lined up behind [President Donald] Trump during the election campaign… if you listen to what they have been saying about why they flipped, a lot of it was because there were some gentle regulations on crypto and AI during the Biden administration, including things like trying to figure out how to prevent AI from killing us all, and keeping it away from nuclear weapons," Klein said during Tuesday's panel. "Really sort of sensible policy… Apparently this was too much.”
While Congress fails to act, the people are stepping up.
“What we are seeing all over this country, from conservative areas, in progressive areas, [is] people saying, hey, thank you very much, we prefer not to have a data center in our community,” said Sanders—who recently introduced the Artificial Intelligence Data Center Moratorium Act with Rep. Alexandria Ocasio-Cortez (D-NY)—pointing to one example of people-powered victories.
“So this is really an unprecedented grassroots revolt, not only against the data centers, but against this whole idea... of very, very wealthy people operating in a secretive mode, pushing through what they want against the needs of ordinary people,” he added.
Klein said that “we need to have a national and international conversation, because these are global technologies, about how we can use these very powerful tools to make our lives better, to enhance life, to have a human-first AI policy.”
“And that means that we look at it holistically,” she continued. “We figure out how we do it in the least resource-intensive way to have the best results. And then it isn't about turning a bunch of guys into trillionaires.”
“It's about what kind of society we want to live in, how we want to treat each other, how we want to protect the natural world,” Klein added. “I think we should be having town hall conversations about it, and we might find out that we have more in common with our neighbors than we thought."