Rights advocates warn weakening Section 230 would disproportionately silence and endanger "marginalized communities including LGBTQ+ people, Black and Brown folks, sex workers, journalists, and human rights activists around the world." (Photo: MoMo Productions/Getty Images)

Groups Warn SCOTUS May Gut 'Foundational' Digital Rights Law

"Weakening Section 230 would be catastrophic—disproportionately silencing and endangering marginalized communities," said one campaigner.

Digital rights advocates responded with alarm to the U.S. Supreme Court's Monday decision to take up a case that could enable right-wing justices to gut Section 230 of the Communications Decency Act.

"Section 230 is a foundational and widely misunderstood law that protects human rights and free expression online," said Fight for the Future director Evan Greer in a statement late Monday.

"At a time when civil rights and civil liberties are under unprecedented attack," Greer warned, "weakening Section 230 would be catastrophic--disproportionately silencing and endangering marginalized communities including LGBTQ+ people, Black and Brown folks, sex workers, journalists, and human rights activists around the world."

Dubbed "the most important law protecting internet speech" by the Electronic Frontier Foundation, Section 230 shields platforms from liability for content published by others. For example, Meta may not be liable for the content of a Facebook user's post, and a blog or news site may not be liable for what a reader writes in an article's comment section.

"Section 230 is a foundational and necessary law. It benefits not just tech companies large and small, but the hundreds of millions of people who use their services daily," Free Press vice president of policy and general counsel Matt Wood said Tuesday. "It's hard to overstate the complexity and importance of cases like the one the Supreme Court has agreed to hear."

After declining to hear a Section 230 case two years ago, the nation's highest court, which is now dominated by six right-wing justices and has a historically low approval rating, agreed Monday to take up Gonzalez v. Google.

As SCOTUSblog detailed:

The question now before the court is whether Section 230 protects internet platforms when their algorithms target users and recommend someone else's content. The case was filed by the family of an American woman killed in a Paris bistro in an ISIS attack in 2015. They brought their lawsuit under the Antiterrorism Act, arguing that Google (which owns YouTube) aided ISIS's recruitment through YouTube videos: specifically, recommending ISIS videos to users through its algorithms.

A divided panel of the U.S. Court of Appeals for the 9th Circuit ruled that Section 230 protects such recommendations, at least if the provider's algorithm treated content on its website similarly. The majority acknowledged that Section 230 "shelters more activity than Congress envisioned it would." However, the majority concluded, Congress, rather than the courts, should clarify how broadly Section 230 applies.

The justices also agreed to take up a petition for review filed by Twitter, in a lawsuit filed against it by the family of a Jordanian citizen killed in an ISIS attack on a nightclub in Istanbul. In the same opinion as its ruling in Gonzalez, the 9th Circuit held that Twitter, Facebook, and Google could be held liable, regardless of Section 230, for aiding and abetting international terrorism by allowing ISIS to use their platforms.

Wood said that "Section 230 balances the understandable desire and need for accountability for heinous acts like the deadly attacks perpetrated in these cases against the need to protect free expression, open dialogue, and all manner of beneficial activities on the modern internet."

"Section 230 lowers barriers to people sharing their own content online without the pre-clearance platforms would demand if they could be liable for everything those people say and do," he explained. "This law protects platforms from being sued as publishers of other parties' information, yet it also permits and encourages these companies to make content moderation decisions while retaining that protection from liability."

Thus, the law in question "encourages the open exchange of ideas, but also takedowns of hateful and harmful material," Wood emphasized. "Without those paired protections, we'd risk losing moderation and removal of the very same kinds of videos at issue in this case."

"We'd risk chilling online expression too, since not all plaintiffs suing to remove ideas they don't like would be proceeding in good faith as the victims' families here clearly did," he added. "Both risks are especially significant for Black and Brown communities, LGBTQIA+ people, immigrants, religious minorities, dissidents, and all people and ideas targeted for suppression or harassment by powerful forces."

Wood continued:

Section 230 allows injured parties to hold platforms liable for those platforms' own conduct, as distinct from the content they merely host and distribute for users. But some courts have interpreted the law more broadly and prevented any such test of platforms' liability for their own actions. Free Press believes platforms could and often should be liable when they knowingly amplify and monetize harmful content by continuing to distribute it even after they're on notice of the actionable harms traced to those distribution decisions.

Yet using this case to effectively repeal or drastically alter Section 230 would be a terrible idea. Platforms that filter, amplify, or make any content recommendations at all should not automatically be subject to suit for all content they allow to remain up. That kind of on/off switch for the liability protections in the law would encourage two bad results: either forcing platforms to leave harmful materials untouched and free to circulate, or requiring them to take down far more user-generated political and social commentary than they already do.

Greer delivered a similar warning about content moderation while also pointing out that "by increasing the risk of litigation for small- and medium-sized platforms, altering Section 230 would solidify the monopoly power of the largest companies like Facebook and Google."

Noting the myth that the law has been used to "censor" right-wing views, Greer stressed that "weakening Section 230 protections wouldn't prevent social media companies from removing posts based on political views, just like it wouldn't incentivize platforms to moderate more thoughtfully, transparently, or responsibly. It would only incentivize them to moderate in whatever manner their lawyers tell them will avoid lawsuits, even if that means trampling on marginalized people's ability to express themselves online."

Greer also warned that the right-wing justices' recent reversal of Roe v. Wade "makes the prospect of Section 230 being weakened even more nightmarish," given anti-choice efforts to limit online information about abortion care and advocacy. She said that the legal immunity provided by the law "is the only thing preventing far-right groups and the attorneys general of states like Texas and Mississippi from effectively writing the speech rules for the entire internet."

Both the Supreme Court and Congress "should leave Section 230 alone," she argued. "Lawmakers should focus their efforts on enacting privacy legislation strong enough to effectively end the surveillance-driven business model of harmful tech giants" while federal regulators "crack down on corporate data harvesting and use of personal data to power harmful and discriminatory algorithms."

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.