

Given the speed of AI’s development and its ubiquity, relying on companies to self-regulate is like closing the laptop after the deepfakes have been posted.
The explosion of AI into the marketplace has led to fears that workers, including white-collar workers, will soon become obsolete; that Big Tech firms will control more and more property, including intellectual property; that AI data centers will require so much energy as to overwhelm small communities, raise electricity prices, and accelerate global warming; and that the ongoing concentration of money, power, and software in the hands of tech billionaires will enable them to control political discourse and surveil the masses. Critics rightfully worry about AI upsetting social conventions, invading personal privacy, destroying jobs by making workers redundant, and challenging social mores.
When considered soberly, the risks of AI are the risks that accompany any new technology: reinforced racial bias and discrimination, economic inequality, deskilling of workers, and misinformation and manipulation that reflect existing power structures. Already-pervasive, society-wide gender and racial biases are reinforced in AI. Those programming AI systems are overwhelmingly white men, leading to biases in the development of AI tools, cybersecurity systems, policing software, and cameras.
AI has become a powerful force even in the area of pornography, where the dangers that accompany its spread illuminate the risks of the diffusion of AI generally. The shocking impacts include deepfakes (the artificial manipulation of images to embarrass or hurt others) and child abuse. Elon Musk’s “Grok” app allows users to virtually undress anyone, including minors, while “X” refuses to take action. The American Federation of Teachers left “X” because of its dissemination of “sickening” images of children in various states of nudity.
These worries are playing out against the backdrop of the Epstein sexual predator scandal, which also involves modern technology, wealth, and privileged men. It is reflected in the unfettered development of pornographic applications, too many of which thrive on the sexual exploitation of women and children. In the US, President Donald Trump’s determination, at the urging of industry, to avoid regulating AI thus becomes a greater danger. The spread of risky AI pornography results not from the unfettered prurient interests of purveyors and users, nor from a lack of moral safeguards, but from a failure of governance and an unwillingness to stifle profit in the name of free speech.
In order to exert proper controls on the dark, abusive side of AI porn—and AI generally—we must understand what it is, how it developed, and how it might be controlled. Pornographic content has had a major presence in erotic and bawdy books and magazines over the centuries. You might say it became mainstream with Geoffrey Chaucer’s Canterbury Tales (late 14th century), although the modern notion of pornography arose in the mid-19th century. The internet enabled a pornography boom by bringing it to any computer and eventually to any cell phone. Though porn was expensive to produce, it generated high income. This stimulated further development of internet platforms, where it is both pervasive and free. Rather than selling copies of videos, the industry cleverly embraced online platforms to create multiple income streams through blind links, pop-up windows, pay-per-click ads, and the sharing of traffic with other sites.
AI and such associated technologies as handheld electronic cameras and web pages have transformed the porn industry from large and studio-centered to a cottage industry run from virtually any tube site, small warehouse, or apartment. But Big Tech dominates. Of the more than 1 billion websites, fewer than 200 million of which are active, at least 4%, and perhaps as many as 12%, are porn related. By usage, even more of the net is related to pornography, accounting for perhaps 30% of the internet’s data usage, with raw bandwidth usage six times larger than that of Hulu or YouTube. MindGeek, the owner of several of the most visited sites, including Pornhub, RedTube, and YouPorn, is a dominant force. Between 2013 and 2019, the number of visits registered on Pornhub nearly tripled, from 14.7 billion to 42 billion, with traffic increasingly originating from mobile devices; in January 2024 alone there were 11.4 billion mobile visits worldwide.
The majority of users are male.
All of these visits to porn sites generate huge profits, well over $100 billion worldwide annually. For perspective: these profits are greater than those of Apple, GM, and other major corporations. By the 2020s the top porn-producing countries were the United States, at 24.5%; the United Kingdom, at 5.5%; and Germany, Brazil, France, and Russia, at between 4% and 5%. The vibrant OnlyFans site, on which performers own their own content, reported $7.22 billion in gross revenue in 2024. During the Covid-19 pandemic, as isolated individuals turned to the web for sexual comfort, OnlyFans gross revenue rose 118%, followed by annual increases of 16% and 19% in 2022 and 2023, respectively.
The development of AI-generated pornography moved hand in hand with the rise of generative artificial intelligence. Much of the material is artificial, or at the very least enhanced. Many publicly accessible AI models generate text, audio, and images across the entire spectrum of human activities. They include ChatGPT, Gemini, DeepSeek, DALL-E, and Midjourney, which have content moderation systems to prevent the creation of sexually explicit material. But a large volume of the output is deepfakes and child pornography, both of which have generated outrage and calls for their control, if not outright criminalization, and their rapid removal from the web. And moderation works only so far.
As quickly as new AI programs are developed, work-arounds to the restrictions are found. A separate market for so-called unmoderated or uncensored generative AI tools has also emerged, enabling the production of sexually explicit content through web and app interfaces. As examples: Dreampress.ai and MySpicyVanilla.com generate erotic stories from prompts, while PornPen.ai, Pornderful.ai, Unstability.ai, and other apps enable pornographic images or videos. The exploitation of women’s sexual images without consent, coupled with the lack of robust oversight or age verification for mainstream platforms, perpetuates a cycle of harm.
By now, websites dedicated to AI-generated adult content have spread into the mainstream, where they may promote predation. They are first of all businesses dedicated to generating market interest and profit, not to self-regulation. Drawing on huge libraries and data sets, they enable users to customize their preferences for body type; facial features; such enhancements as implants, tattoos, and piercings; kinds of encounters and positions; and fetishes. From the privacy of his own domain, a user can thereby have sexual encounters, thinking he may do so without endangering others or himself.
Ultimately, however, AI pornography distorts human sexuality, because everything is on demand and seemingly risk-free. It trains desire without reciprocity. It erodes the human capacity for negotiation, refusal, and mutual recognition. What looks like personalization of preference is actually the substitution of a screen for a living, feeling, autonomous partner. Thus, AI porn is less about sex than about power: It teaches users to expect intimacy without vulnerability and especially without responsibility, and it facilitates abuse of women and girls.
This terrible reality plays out with respect to deepfakes. Deepfakes make it possible to create naked photos or videos of someone, then to use the artificial pornography to embarrass, blackmail, or otherwise hurt her or him. “Nudify” sites have proliferated rapidly, allowing millions of people to create nonconsensual images. Apps like DeepSwap and Face Swapping, which let users swap the faces in a video for different faces obtained elsewhere, have proliferated since the emergence of generative AI three years ago. Digitally edited pornographic videos featuring the faces of hundreds of non-consenting women draw tens of millions of visitors to websites.
Deepfakes are a “new method to deploy gender-based violence and erode women’s autonomy in their on-and-offline world.” In fact, in 2023, 98% of 95,820 deepfakes online were pornographic and 99% of those videos targeted women. To facilitate targeting, AI entrepreneurs created a website, MrDeepFakes, to which altered images have been uploaded for viewing and purchase. Deepfakes may be used as “revenge porn” when a jilted suitor determines to abuse an acquaintance by posting nonconsensual intimate AI images. As Paris Hilton recently testified on Capitol Hill about her experience with a private video gone public: “People called it a scandal. It wasn’t. It was abuse.”
As a result, there has been a sharp increase in crimes targeting children on the internet (online enticement, AI abuse, and trafficking). Reports of generative artificial intelligence (GAI) material related to child sexual exploitation have skyrocketed from 6,835 to 440,419 in the last year alone. In the past few years in the US, 93.5% of individuals sentenced for sexual abuse were men; in cases involving child pornography, 67% were white men and 95% were US citizens. In February 2025, Europol busted a criminal gang that was distributing AI-generated images of child sexual abuse online. Abusive behavior extends to secondary schools, where students produce deepfake nude photos of their classmates with the help of AI. Boys are much more likely than girls to generate a deepfake nude photo. But because of the ease of production, the amorality of website owners, and the lack of regulation, there has been limited progress in fighting deepfakes.
In response to public outcry over the perceived dangers of recombinant DNA research in the 1970s, the Cambridge, Massachusetts, City Council voted to restrict work at MIT and Harvard laboratories. The vote, and the concerns of molecular biologists themselves, led the burgeoning rDNA industry to adopt safety regulations on its own. In AI, too, the industry is by and large self-regulated to guard against misuse, disarm public interference, and ensure booming business opportunities. However, given the speed of AI’s development and its ubiquity, such a decision to self-regulate is like closing the laptop after the deepfakes have been posted.
A number of social media platforms and AI companies voluntarily introduced regulations and standards to limit hate speech, and combat incitement to violence against specific groups, genders, and orientations. More recently, many of these safeguards have been removed in the name of free speech and the right of the public to information. This has resulted in an explosion in hate speech, racism, and deepfakes. For example, after its acquisition by Elon Musk, Twitter took longer to review hateful content and remove it, an unsurprising result given that Musk fired thousands of employees who were responsible for moderation. He also has a misogynist view of women (whom he called “womb-creatures”), and he publicly saluted the Nazis who, he believes, merit a platform. Homophobic, transphobic, and racist hate speech on Twitter increased 50% under his ownership.
Similarly, in keeping with his quasi-libertarian views of free speech, Musk has refused to rein in Grok, his AI tool. Grok has a “Spicy” option that is being used to produce disgusting photographs of women and children in sexually compromising, explicit, and abusive situations. X officially allows pornographic content on its platform, too, but says it will block adult and violent posts from being seen by users who are under 18 or who do not opt in to see them. Shockingly, US Defense Secretary Pete Hegseth plans to integrate Grok into Pentagon networks, including classified systems, as part of a broader initiative to incorporate AI technology across the military. Does Hegseth have in mind the production of military deepfakes?
Having captured Trump’s fumbling mind, the massive AI industry has convinced the president to oppose meaningful local, state, and national laws in order to avoid “onerous” interference with commerce that might slow innovation. This lack of regulation has spilled over into AI and pornography. The technological billionaires who promote and sell AI applications in pornography may not understand or care about the abuse and suffering of women and children that has resulted from their apps. After all, Elon Musk, Bill Gates, Donald Trump, Howard Lutnick, Sergey Brin, Reid Hoffman, and many more techno-billionaires in government and industry have been linked directly to the Epstein scandal. The heavily redacted files released by the US Department of Justice contain no suggestion that these men committed sex crimes. But what do these contacts say about their attitudes toward women and children, and what has been the result?
The Internet Watch Foundation (IWF) has found thousands of AI-generated pictures online involving the sexual abuse of children. Such groups as the Sexual Violence Prevention Association have demanded stricter controls on AI image tools, swift takedown mechanisms, and legal action against those generating and circulating abusive content. But the number of realistic images, nearly all of which involve girls, skyrockets annually. Perpetrators easily download open-source AI models to their computers and quickly evade safeguards.
Deepfakes might be addressed through such regulatory initiatives as the California AI Transparency Act, the Take It Down Act, the EU AI Act, and the UK Online Safety Act 2023. In 2024 the Czech Justice Ministry moved to amend the law to make deepfake porn a criminal offense and to make it easier for victims to defend themselves. The European Union has taken steps to address cyberstalking, online harassment, and incitement to hatred and violence. Unfortunately, enforcement remains inconsistent. For example, Scotland’s 2021 hate speech law criminalizes incitement to hatred and prejudice but excludes misogynistic hate.
Confronting the purveyors of abusive AI and fighting immoral profit works. Age verification, prior-consent verification, and other checks to prevent abusive AI porn are technically feasible. Listening to pressure from anti-porn advocacy groups, Visa and Mastercard finally refused to accept payments for Pornhub, the world’s leading porn site, after a New York Times report documented abuse and rape. This did more to slow Pornhub’s damaging practices than did years of content moderation. Ultimately, however, platforms face little accountability for hosting harmful content or for profiting from it.
OpenAI CEO Sam Altman believes in treating “adult users like adults,” with some age-gating but little control. Many apps and sites hire armies of content moderators to catch illegal and offensive content. But we have seen how Musk’s decision to fire moderators led to an increase in violent hate speech. OpenAI, for its part, is actively recruiting a “head of preparedness”—a well-paid human—to address the “real challenges” of AI models. Altman had in mind the “potential impact of models on mental health” and models that can find “critical vulnerabilities” that attackers intend to use for harm. Altman’s announcement followed growing concern over the impact of AI chatbots on mental health, with lawsuits alleging that OpenAI’s ChatGPT “reinforced users’ delusions, increased their social isolation, and led some individuals to suicide.”
Like any other technological advance whose promoters have promised revolutionary changes in society and whose detractors have worried about the potential for moral, cultural, and social collapse, AI, in all of its applications, is a human technology, one that will be embraced and applied in human ways. The internet gives an open microphone to voices of anger and reason, to racism and equality, to raw pornographic images and erotic art with few filters. The Luddites of the early 19th century, the factory workers of the mid-20th century, and the more modern critics of robotics have long worried about their inevitable replacement by machines. Now AI has replaced pornographic models. Surely, the next steps require human analysis and intervention that machines, AI, and its billionaire owners can never provide.
"Big Tech companies have spent the past year cozying up to Trump," said one critic, "and this is their reward. It’s a fabulous return on a very modest investment—at the expense of all Americans.”
The White House is rapidly expanding on its efforts to stop state legislatures from protecting their constituents by passing regulations on artificial intelligence technology, with the Trump administration reportedly preparing a draft executive order that would direct the US Department of Justice to target state-level laws in what one consumer advocate called a "blatant and disgusting circumvention of our democracy"—one entirely meant to do the bidding of tech giants.
The executive order would direct Attorney General Pam Bondi to create an AI Litigation Task Force to target laws that have already been passed in both red and blue states and to stop state legislators from passing dozens of bills that have been introduced, including ones to protect people from companion chatbots, require studies on the impact of AI on employment, and bar landlords from using AI algorithms to set rent prices.
The draft order takes aim at California's new AI safety laws, calling them "complex and burdensome" and claiming they are based on "purely speculative suspicion" that AI could harm users.
“States like Alabama, California, New York and many more have passed laws to protect kids from harms of Big Tech AI like chatbots and AI generated [child sexual abuse material]. Trump’s proposal to strip away these critical protections, which have no federal equivalent, threatens to create a taxpayer-funded death panel that will determine whether kids live or die when they decide what state laws will actually apply. This level of moral bankruptcy proves that Trump is just taking orders from Big Tech CEOs,” said Sacha Haworth, executive director of the Tech Oversight Project.
The task force would operate on the administration's argument that the federal government alone is authorized to regulate commerce between states.
Shakeel Hashim, editor of the newsletter Transformer, pointed out that that claim has been pushed aggressively in recent months by venture capital firm Andreessen Horowitz.
President Donald Trump "and his team seem to have taken that idea and run with it," said Hashim. "It looks a lot like the tech industry dictating government policy—ironic, given that Trump rails against 'regulatory capture' in the draft order."
The DOJ panel would consult with Trump and White House AI Special Adviser David Sacks—an investor and cofounder of an AI company—on which state laws should be challenged.
The executive order would also authorize Commerce Secretary Howard Lutnick to publish a review of "onerous" state AI laws and restrict federal broadband funds to states found to have laws the White House disagrees with. It would further direct the Federal Communications Commission to adopt a new federal AI law that would preempt state laws.
The draft executive order was reported days after Trump called on House Republicans to include a ban on state-level AI regulations in the must-pass National Defense Authorization Act, which House Majority Leader Steve Scalise (R-La.) indicated the party would try to do.
The multipronged effort to stop states from regulating the technology, including AI chatbots that have already been linked to the suicides of children, comes months after an amendment to the One Big Beautiful Bill Act was resoundingly rejected in the Senate, 99-1.
Travis Hall, director for state engagement at the Center for Democracy and Technology, suggested that legal challenges would be filed swiftly if Trump moves forward with the executive order.
"The president cannot preempt state laws through an executive order, full stop," Hall told NBC News. "Preemption is a question for Congress, which they have considered and rejected, and should continue to reject."
David Dayen, executive editor of The American Prospect, said the harm the draft order could pose becomes clear "once you ask one simple question: What is an AI law?"
The draft doesn't specify, but Dayen posited that a range of statutes could apply: "Is that just something that has to do with [large language models]? Is it anything involving a business that uses an algorithm? Machine learning?"
"You can bet that every company will try to get it to apply to their industry, and do whatever corrupt transactions with Trump to ensure it," he continued. "So this is a roadmap to preempt the vast majority of state laws on business and commerce more generally, everything from consumer protection to worker rights, in the name of preventing 'obstruction' of AI. This should be challenged immediately upon signing."
The draft order was reported amid speculation among tech industry analysts that the AI "bubble" is likely about to burst, with investors dumping their shares in AI chip manufacturer Nvidia and an MIT report finding that 95% of generative AI pilot programs are not delivering a return on investment for companies. Executives at tech giant OpenAI recently suggested the government should provide companies with a "guarantee" for developing AI infrastructure—which was widely interpreted as a plea for a bailout.
At Public Citizen, copresident Robert Weissman took aim at the White House for its claim that AI does not pose risks to consumers, noting AI technologies are already "undermining the emotional well-being of young people and adults and, in some cases, contributing to suicide; exacerbating racial disparities at workplaces; wrongfully denying patients healthcare; driving up electric bills and increasing greenhouse gas emissions; displacing jobs; and undermining society’s basic concept of truth."
Furthermore, he said, the president's draft order proves that "for all his posturing against Big Tech, Donald Trump is nothing but the industry’s well-paid waterboy."
"Big Tech companies have spent the past year cozying up to Trump—doing everything from paying for his garish White House ballroom to adopting content moderation policies of his liking—and this is their reward," said Weissman. "It’s a fabulous return on a very modest investment—at the expense of all Americans.”
JB Branch, the group's Big Tech accountability advocate, added that instead of respecting the Senate's bipartisan rejection of the earlier attempt to stop states from regulating AI, "industry lobbyists are now running to the White House."
"AI scams are exploding, children have died by suicide linked to harmful online systems, and psychologists are warning about AI-induced breakdowns, but President Trump is choosing to protect his tech oligarch friends over the safety of middle-class Americans," said Branch. "The administration should stop trying to shield Silicon Valley from responsibility and start listening to the overwhelming bipartisan consensus that stronger, not weaker, safeguards are needed.”
“While Donald Trump keeps selling away influence over our government, we’re fighting to ensure the rules are being written to help working Americans, not corporate interests," said Sen. Elizabeth Warren.
Two progressive Democrats are teaming up to push legislation to curb corporate America's capture of the federal government's regulatory process.
Rep. Pramila Jayapal (D-Wash.) and Sen. Elizabeth Warren (D-Mass.) on Wednesday announced a new bill called the Experts Protect Effective Rules, Transparency, and Stability (EXPERTS) Act that aims to restore the role of subject matter experts in federal rulemaking.
Specifically, the bill would codify the Chevron doctrine, a 40-year-old legal precedent overturned last year by the US Supreme Court, under which courts broadly deferred to regulatory agencies' interpretations of the congressional statutes they administer.
The legislation would also push for more transparency by requiring the disclosure of funding sources for all "scientific, economic, and technical studies" that are submitted to agencies to influence the rulemaking process.
Additionally, the bill proposes speeding up the regulatory process by both "excluding private parties from using the negotiated rulemaking process" and reinstating a six-year limit for outside parties to file legal challenges to agencies' decisions.
In touting the legislation, the Democrats pitched it as a necessary tool to rein in corporate power.
“Many Americans are taught in civics classes that Congress passes a law and that’s it, but the reality is that any major legislation enacted must also be implemented and enforced by the executive branch to become a reality,” said Jayapal. “We are seeing the Trump administration dismantle systems created to ensure that federal regulation prioritizes public safety. At a time when corporations and CEOs have outsized power, it is critical that we ensure that public interest is protected. This bill will level the playing field to ensure that laws passed actually work for the American people."
Warren, meanwhile, argued that "giant corporations and their armies of lobbyists shouldn’t get to manipulate how our laws are implemented," and said that "while Donald Trump keeps selling away influence over our government, we’re fighting to ensure the rules are being written to help working Americans, not corporate interests."
The proposal earned an enthusiastic endorsement from Public Citizen co-president Lisa Gilbert, who described it as "the marquee legislation to improve our regulatory system."
"The bill aims directly at the corporate capture of our rulemaking process, brings transparency to the regulatory review process and imposes a $250,000 fine on corporations that submit false information, among other things," she said. "The bill is essential law for the future of our health, safety, environment, and workers. Public Citizen urges swift passage in both chambers."