Why Facebook Filtering Will Ultimately Fail

"After devoting nearly two decades to growth at any cost," writes Karr, the Facebook "network has become far too big NOT to fail--even when you consider the resources Facebook is now throwing at filtering and flagging objectionable content. And those failures are having dangerous ramifications in the real world." (Photo: flickr/GostGo/cc)

Why Facebook Filtering Will Ultimately Fail

We as a society need to do more than hope that Facebook can fix itself.

In its content-moderation report released this week, Facebook revealed that it had removed a whopping 3.2 billion fake accounts from March through September 2019.

That's a lot of disinformation, wrote tech reporter Aditya Srivastava: "To put it in perspective, the number is [nearly] half of living human beings on planet earth."

Facebook also claims to have removed or labeled 54 million pieces of content flagged as too violent and graphic, 18.5 million items deemed to involve sexual exploitation, 11.4 million posts that broke its hate-speech rules, and 5.7 million that violated its bullying and harassment policies.

We need to consider these large-seeming numbers in the context of the social-media giant's astronomical growth. Facebook, Inc. now hosts so much content that it's hard to imagine any filtering apparatus that's capable of sorting it all out.

"Our challenge is to look beyond fixing Facebook to building a new model for social media that doesn't violate human and civil rights, or undermine democratic norms."

Facebook CEO Mark Zuckerberg admitted to Congress last month that more than 100 billion pieces of content are shared via Facebook entities (including Instagram and WhatsApp) each day.

Bear with me because the math gets interesting.

At a hundred billion pieces of content a day, that's roughly 18 trillion (with a "t") pieces of content over the six-month period covered in Facebook's moderation report.

Or to put it in Srivastava's terms, that's comparable to having every living individual on Earth posting content to Facebook platforms 13 times each and every day.
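For anyone who wants to check that back-of-the-envelope math, here is a minimal sketch in Python. The only inputs are Zuckerberg's 100-billion-a-day figure, a roughly six-month reporting window, and an approximate 2019 world population of 7.7 billion; all three are rough assumptions, not precise figures.

```python
# Rough back-of-the-envelope check of the figures above (all inputs approximate).
DAILY_SHARES = 100e9        # ~100 billion pieces of content shared per day (Zuckerberg's figure)
REPORT_DAYS = 183           # roughly the six-month window covered by the report
WORLD_POPULATION = 7.7e9    # approximate world population in 2019

total_shares = DAILY_SHARES * REPORT_DAYS                      # ~1.8e13, i.e. ~18 trillion
shares_per_person_per_day = DAILY_SHARES / WORLD_POPULATION    # ~13

print(f"Content shared over the period: ~{total_shares / 1e12:.0f} trillion pieces")
print(f"Equivalent to every person on Earth posting ~{shares_per_person_per_day:.0f} times a day")
```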

That's where we are today.

Content armies

To filter this tsunami of images and text in search of suspicious posts, Facebook has deployed an army of more than 30,000 content moderators, and is aided in this effort by artificial intelligence designed to flag content that may violate its rules.

It reminds me of my experiences working as a journalist in Hanoi in the pre-internet early 1990s.

My minder from Vietnam's Ministry of Culture and Information told me that the government employed more than 10,000 people skilled in foreign languages to monitor all outgoing and incoming phone calls and faxes, which is how we communicated back then.

Even in an era still dominated by analog communications, the ministry's many listeners struggled to keep up.

And while I received the occasional knock on the door from a ministry apparatchik concerned about a phone conversation I'd had or an article I'd published, the vast majority of my work--including stories the government would have frowned on--went through to my editors in Hong Kong and Tokyo without detection.

Fast forward to 2019.

Just last month, a fairly senior Facebook employee told me that the company is shooting for a 99-percent global success rate in flagging content that violates its rules regarding hateful or racist activity.

But it still has a way to go. And even a 99-percent filter may not be good enough.

Too big NOT to fail

In its most recent report, the social-media giant claimed to have reached an 80-percent "proactive rate"--a metric by which the company measures its ability to flag hateful content before its community of users reports it.
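As a rough sketch of how such a metric works (the function and variable names here are mine, not Facebook's):

```python
# Minimal sketch of a "proactive rate" style metric; names are illustrative, not Facebook's.
def proactive_rate(flagged_by_platform_first: int, total_actioned: int) -> float:
    """Share of actioned content the platform flagged before any user reported it."""
    return flagged_by_platform_first / total_actioned

# At an 80-percent proactive rate, roughly one in five actioned posts
# was reported by a user before the platform's own systems caught it.
print(f"{1 - proactive_rate(80, 100):.0%}")  # 20%
```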

What does this mean? It means a lot of things. For one, that Facebook seems to be making a serious effort to combat hateful activity and disinformation coursing across its network.

But it also means that despite these efforts, the flood of content remains far too overwhelming to manage; that even at a very high proactive rate, millions of posts that violate the company's rules would still slip through.
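To see why, here's a purely illustrative calculation in Python. The violation rate below is a hypothetical placeholder, not a figure from Facebook's report; the point is only that at the scale of 100 billion shares a day, even a 99-percent-effective filter misses posts by the millions.

```python
# Illustrative only: violation_rate is a hypothetical assumption, not a reported figure.
DAILY_SHARES = 100e9        # ~100 billion pieces of content shared per day
violation_rate = 0.0001     # assume just 0.01% of shares break a rule (hypothetical)
catch_rate = 0.99           # a 99-percent-effective filter

violating_per_day = DAILY_SHARES * violation_rate       # ~10 million rule-breaking posts a day
missed_per_day = violating_per_day * (1 - catch_rate)   # ~100,000 slip through daily
missed_per_six_months = missed_per_day * 183            # ~18 million over the reporting period

print(f"Missed per day: ~{missed_per_day:,.0f}")
print(f"Missed over six months: ~{missed_per_six_months:,.0f}")
```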

And its average success rate is likely far lower in countries where Facebook lacks the language and cultural expertise (or, some would argue, the economic interest) to effectively flag harmful content.

A report released by Avaaz at the end of October seems to confirm this. Avaaz found that Facebook removed only 96 of 213 posts the group had flagged for attacking India's Bengali Muslim community with online tactics similar to those used against the Rohingya minority in Myanmar.

One post came from an elected official who described "Bangladeshi Muslims" as those who rape "our mothers and sisters." Others included calls to "poison" their daughters and to legalize female feticide.

So, again, what does this mean?

It means that even when operating at optimal levels, Facebook's content-moderation schemes will still miss posts that incite people to inflict real offline violence on some of the most vulnerable among us. And its algorithms will still promote content that provokes strongly partisan and divisive reactions among users.

It also means that we as a society need to do more than hope that Facebook can fix itself. After devoting nearly two decades to growth at any cost, the company's network has become far too big NOT to fail--even when you consider the resources Facebook is now throwing at filtering and flagging objectionable content. And those failures are having dangerous ramifications in the real world.

Our challenge is to look beyond fixing Facebook to building a new model for social media that doesn't violate human and civil rights, or undermine democratic norms.

There's only so much Facebook can do to fix the problem at its core. The solution lies elsewhere, with those of us willing to create a better platform from scratch.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.