In this photo illustration, a smartphone screen displays a new policy on Covid-19 misinformation with a Facebook website in the background, on May 27, 2021, in Arlington, Virginia. (Photo: Andrew Caballero-Reynolds/AFP via Getty Images)

Misinformation's Deadly Profit Motive

Big Tech companies continually put profits and growth over the safety of their users. We cannot wait for more people to die before we take action.

Are social-media companies killing people?

Last week, President Biden said they were. But on Monday, he clarified his remarks, saying it's misinformation that's the real threat.

Actually, it's the combination of the two that's costing lives. Biden's comments followed a U.S. surgeon general advisory, which found that the user-engagement model driving businesses like Facebook and YouTube makes it easy for deadly misinformation to spread at a speed and scale never before possible. These online platforms have designed their products in a way that encourages users to share false content--causing people to reject public-health initiatives against COVID-19, attack public-health workers, and embrace dangerous "miracle cures."

"When it comes to misinformation, not sharing is caring," Surgeon General Vivek Murthy said during a White House press briefing last week. His advisory offers a detailed account of the ways that the spread of health mis- and disinformation has flooded communities with lies.

Health misinformation was deadly prior to the rise of internet platforms, but the problem is proliferating in new ways because of the technology these companies use to extract and exploit our demographic and behavioral data.

This data extraction enables the precision targeting of people with ads, content and group recommendations, including the more than 43 million people in the United States whose first language is Spanish. According to multiple reports, bad-faith actors and the misinformed have taken to Spanish-language YouTube programs, WhatsApp communities and Facebook groups to spread fake cures and conspiracy theories about a range of health threats. While these companies have done a poor job of tackling this problem to protect English-speaking users, they've done far worse at addressing the spread of misinformation in languages spoken by the country's many immigrant communities.

Dr. Murthy urged platforms to "step up" and do more, including operating with greater transparency, monitoring misinformation more closely and taking action against the multitude of disinformation super-spreaders who've found a lucrative home on their services.

It's an unprecedented statement coming from the nation's top doctor, but platforms like Facebook, Twitter and YouTube will do only so much without regulations that hold them accountable to public health and welfare. Like Big Tobacco before them, Big Tech companies continually put profits and growth over the safety of their users. We cannot wait for more people to die before we take action.

And while the many public-interest campaigns pressuring the platforms to do better are important, the government has a critical role to play in reining in the companies' toxic revenue models.

Free Press Action has outlined three measures lawmakers and the White House must take to respond to the dangerous spread of platform mis- and disinformation. These steps--more than any changes technology companies have voluntarily made--are an essential way to begin safeguarding the future of our information ecosystem and the health of its users.

First, Congress must pass the Algorithmic Justice and Online Platform Transparency Act, introduced earlier this year by Sen. Edward Markey (D-Massachusetts) and Rep. Doris Matsui (D-California). This crucial legislation would disrupt the way social-media platforms use personal data and discriminatory algorithms to target users and increase company profits at all costs. It would force more transparency on the ways tech companies use engagement algorithms, prevent the discriminatory use of personal information, and establish an interagency task force to set public-safety-and-effectiveness standards for algorithms.

Second, we need to better support trustworthy local news and information as an antidote to the spread of online lies. A recent study from UNC's Center for Media Law and Policy found that mis- and disinformation spread on social media often fills the vacuum left when communities lose trusted sources of local-news coverage. Free Press Action has proposed a tax on the advertising revenues that power Big Tech to fund the kinds of diverse, local and independent journalism that's gone missing as local newsrooms have shut down and journalists have lost their jobs. A portion of these ad-tax revenues should go to supporting non-English news and information in communities often neglected or misinformed by mainstream outlets.

Third, the White House needs to appoint an interagency official to coordinate study and action on tech companies' harmful data practices, including the ways their business models scale up the spread of deadly health misinformation. This work includes encouraging the Federal Trade Commission to begin a rulemaking on harmful data and algorithmic practices, which would examine the algorithm-based business model that promotes toxic lies and relies on engagement-for-profit over public accountability. Focusing attention on the ways platforms monitor and remove multilingual misinformation should be a priority.

When the surgeon general's office first sounded the alarm about tobacco in 1964, Congress followed with measures to begin to repair the extensive harms the industry had inflicted on generations of first- and secondhand smokers. Now that Dr. Murthy has issued a similar health warning against platform misinformation, the government must do more than just slap a warning label on Big Tech's products. Lawmakers must protect people from a business model that makes deadly misinformation profitable.
