Should President Biden Revoke Section 230?

The recent attack on the US Capitol reveals the danger of digital media platforms. Here’s what we must do to transform Silicon Valley’s corruption of the media infrastructure.

The beautiful dream of an open and free internet, serving as a global agora of unlimited free speech to provide for more democratic participation, has crashed and burned one more time. The mob attack on the US Capitol was incited and planned over Facebook, Twitter, YouTube and other digital media platforms, with a tragic nudge from the president of the United States. The gripping images of a ransacking mob, gunshots and Congress members cowering on the floor of the House of Representatives are a warning to us all.

How did we arrive here?

Since the birth of the Big Tech media platforms 15 years ago--let's drop the friendly-sounding misnomer of "social" media--democracies around the world have been subjected to a grand experiment: can a nation's news and information infrastructure, the lifeblood of any democracy, depend on digital technologies that allow a global free speech zone of unlimited audience size, combined with algorithmic (non-human) curation of massive volumes of mis- and disinformation that can be spread with unprecedented ease and reach?

The evidence has become frighteningly clear that this experiment has veered off course, like a Frankenstein monster marauding across the landscape.

Facebook is no longer simply a "social networking" website--it is the largest media giant in the history of the world, a combination publisher and broadcaster with approximately 2.6 billion regular users, plus billions more on the Facebook-owned WhatsApp and Instagram. A mere 100 pieces of COVID-19 misinformation on Facebook were shared 1.7 million times and racked up 117 million views--a far larger daily audience than the New York Times, Washington Post, Wall Street Journal, Fox News, ABC and CNN combined.

The FacebookGoogleTwitter media giants have been misused frequently by bad political operatives for disinformation campaigns in over 70 countries to undermine elections, even helping elect a quasi-dictator in the Philippines, and to widely livestream child abusers, pornographers and the Christchurch massacre of Muslims in New Zealand. How can we unite to take action on climate change when a majority of YouTube climate change videos deny the science, and 70% of what YouTube's two billion users watch comes from its sensation-saturated recommendation algorithms?

Traditional media are subject to certain laws and regulations, including a degree of liability for what they push into the world. While there is much to criticize about mainstream media and corporate broadcasters, at least they use humans to curate the news, and pick and choose what's in and out of the news stream. That results in a degree of accountability, including potential libel lawsuits and other forms of Madisonian-like checks and balances.

But with Big Tech media, it's more like the wild wild West, with no sheriff. FacebookGoogleTwitter use robot algorithm curators that are on automatic pilot, much like killer drones for which no human bears responsibility or liability. That's dangerous in a democracy.

So non-human curation, when combined with unlimited audience size and frictionless amplification, has completely failed as a foundation for a democracy's media infrastructure. It's time to hit reset in a major way, not only to save our democracy, but also to provide the best chance to redesign these digital media technologies so that we retain the promise and decrease the dangers.

To Section 230 or not to Section 230, that is the question

After a lot of soul-searching, and sifting through arguments pro and con from leaders and organizations such as the libertarian Electronic Frontier Foundation (con) and longtime Biden ally Bruce Reed (pro), I have concluded that President Joe Biden should make good on one of his campaign promises by asking Congress to revoke Section 230 of the Communications Decency Act. That 1996 law grants Big Tech media blanket immunity from liability for the mass of content published on their platforms. While revoking Section 230 is not a perfect solution, it would make the companies more responsible, deliberative and potentially liable for the worst of the toxic content, including illegal content, that is algorithmically promoted by their platforms--just as traditional media already are.

But let's be clear: some of the most reckless content would likely not be impacted by 230's revocation. For example, Donald Trump's posts on Twitter and Facebook claiming the presidential election was stolen, and his inflammatory speech that YouTube broadcast the morning of the Capitol attack to millions, were false and provocative--but it would be difficult to legally prove that any individuals or institutions were harmed or incited directly by the president's many outrageous statements. After all, any number of traditional media outlets also have published untrue nonsense without the protections of Section 230, yet they were never held liable. Much content and speech--even undesirable speech--is already protected by the First Amendment.

But FacebookGoogleTwitter's "engagement algorithms" recommend and amplify sensationalized crazytown user content for one reason--to maximize profits by increasing users' screen time and exposure to more ads. In fact, the Wall Street Journal reported that Facebook executives scaled back a successful effort to make the site less divisive when they found that it was decreasing their audience share. So this is more like commercial speech, which is entitled to less protection. All their pretensions about an "open and free internet" aside, their primary business strategy has resulted in the dividing, distracting and outraging of people to the point where society is now plagued by a fractured basis for shared truth, sensemaking and political consensus.

And yet they still refuse to de-weaponize their platforms. Ejecting Donald Trump from their services did nothing to change their destructive business model; it just hid the most visible evidence of it. It was a self-serving act that should fool no one.

A better business model--investor-owned utilities

So revoking Section 230 will likely not be as impactful as its proponents wish, or as its critics fear. What else needs to be done?

To answer this, we have to recognize that these businesses are creating the new public infrastructure of the digital age, including search engines, global portals for news and networking, web-based movies, music and live streaming, GPS-based navigation apps, online commercial marketplaces, and digital labor-market platforms. They tell us that they are providing all of this for free--all we have to do is give them access to our private data. But that has turned out to be a very high price indeed, as the Capitol riots showed.

So the federal government should advance the regulatory incentives for a whole new business model: treating many of these companies more like investor-owned utilities. Historically, that has been the approach used by the government in other industries, such as telephone, railroad and power generation. Ironically, even Mark Zuckerberg himself has suggested such an approach.

As utilities, they would be guided by a digital license--just as traditional brick-and-mortar companies must apply for various permits--that defines the rules and regulations of the new business model. This license should require platforms to obtain users' permission before collecting anyone's personal data--i.e., opt-in rather than opt-out. These companies never asked for permission to start sucking up our private data or to track our physical locations, or to mass-collect every "like," "share" and "follow" into psychographic profiles that are used by advertisers and political operatives to target users. The platforms started these "data grabs" secretly, forging their destructive brand of "surveillance capitalism."

Today, these giant platforms know what you like, think and watch, where you go, and which churches, restaurants and clubs you frequent--they know you better than your spouse or therapist. Should society continue to allow this data grabbing? It seems clear that the dangers of this spying outweigh any alleged benefits, such as hyper-targeted advertising that supposedly caters to our individual desires.

The utility business model also should encourage competition by limiting the mega-scale audience size of these digital media machines; nearly 250 million Americans, about 80 percent of the population, have a profile on one of these platforms. A number of organizations have called for an anti-monopoly breakup of these companies, much as AT&T was once split into the Baby Bells. That intervention has merits, but let's be clear: if Facebook is forced to spin off WhatsApp and its two billion users, and nothing else about the business model changes, that will just result in another Big Tech media behemoth. More competition is good, but less so if the companies are competing according to market rules that they themselves have set.

Another way to reduce the user pools would be through incentives to shift to a revenue model based more on monthly paying subscribers, as Netflix and cable TV do, rather than on hyper-targeted advertising. That also would likely result in a decline in users.

A utility model also should incorporate other relevant frameworks, such as a fiduciary "duty of care" obligation, a kind of moral and legal Hippocratic oath to "first, do no harm." British authorities have been trying to erect the foundations of this approach, and the US could partner with them.

Another relevant framework is a product liability model. Imagine the danger if the manufacturer of a pandemic vaccine or a medical device could start injecting people, or opening up patients' chests to insert its latest artificial organ, without having its products tested and certified before widespread use. Nuclear power plants, voting equipment vendors and many other systemically important businesses follow such a protocol.

The application of these frameworks implies restraints on the platforms' use of specific "engagement" techniques that both research and lived experience have shown contribute to social isolation, teen depression and suicide, and damage our democracy. These techniques include hyper-targeting of content and advertisements, automated recommendations and addictive behavioral nudges (like pop-up screens, autoplay and infinite scroll) that facilitate manipulation.

The US also should update existing laws to ensure they are applied to the online world. Google's YouTube/YouTubeKids has been violating the Children's Television Act--which restricts violence and advertising on TV--for many years, resulting in online lawlessness that the Federal Communications Commission should halt. Similarly, the Federal Elections Commission should rein in the quasi-lawless world of online political ads and donor reporting, which has far fewer rules and less transparency than ads in TV and radio broadcasting.

Big Tech media's frequent outrages against our humanity are supposedly the price we must pay for being able to post our summer vacation and new puppy pics to our "friends," or for political dissidents and whistleblowers to alert the world to their just causes. Those are all important uses, but the price being paid is very high. We can do better.

The challenge is to establish sensible guardrails for this 21st century digital infrastructure, so that we can harness the positives and greatly mitigate the dangers. America has done this in the past with new technologies and infrastructure, so we should proceed with confidence that we can get this right.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.