A small-government intervention will clean up the public market and force Threads—and Meta—to build a better, safer sewing machine.
As a kid, I worked in a men’s store tailor shop on the East Side of Cleveland. It was chaos, watching master tailors cut, sew, and press tiny threads into modern fashion. My job was to clean the shop, oil the machines, and keep the steam presses hydrated. Thread was everywhere and constantly needed to be swept up, as each garment was crafted with care and purpose.
Whether Meta founder Mark Zuckerberg realized it or not, the name of his new text-based social media platform, Threads, is the perfect metaphor for the new platform we’ve all been craving. Will it be sewn into something beautiful or just another tangled mess that needs to be swept up?
Elon Musk’s decisions at the helm of Twitter and the longstanding issues surrounding the lack of controls against bullies and bots have disgusted millions of users. But is jumping ship to a new platform—owned by a flawed company that has not cleaned up its own issues—the way we want to engage?
Social media fashions have changed from when we first logged on over a decade ago. We are no longer excited by chaos, stunts, or gimmicks, or learning basic HTML to customize our backgrounds on MySpace. Many of us just want an uncluttered, simple social platform that’s bully and bot-free, and isn’t trying to sell us stuff we don’t want or need. Adam Mosseri, the head of Instagram, knows this, and was quoted in The New York Times saying he wants “Threads to be a ‘friendly place’ for public conversation.”
But is that even possible, given that Threads has seemingly already fallen short on protections? Within my first day on Threads, I faced issues that have plagued Twitter—a blatantly similar type of platform—for years: fake profiles and bots were already following my account.
If Threads wants to succeed, it needs a bobbin to keep it running smoothly. Think of it as adding some simple guardrails to keep the threads from jamming the machine. Without this basic intervention, we already know the downward spiral that’s coming next.
We have watched social networks, including Meta, fight to keep and expand archaic protections granted in 1996’s Communications Decency Act. These protections were created so that companies like AOL and Prodigy could be treated as blind infrastructure, like a telephone line, and never be held liable for any communications carried over their networks.
These laws were created before there were modern-day social networks, let alone billions of dollars in advertising revenue being moved through them.
Unfortunately, as each of these platforms competes to become the largest network in the free market, without any intervention or protections, they will create more of the same bot-driven cesspools, spreading misinformation and disinformation and promoting false advertising. There is no real incentive for them to do anything different in the United States. Threads is not yet in the European Union, since the E.U. has stricter privacy laws. It also has yet to implement advertising, but that’s just a matter of time.
Now is the time to evolve the Communications Decency Act so that the next generation of social networks is sewn into a more wearable garment. This is not un-American. Think back to that famous Thomas Jefferson quote: “We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors.” Let’s follow this lead and advance our social platforms by evolving Section 230 of the 1996 Communications Decency Act, forcing these powerful companies to take accountability for their actions.
Historically, Twitter took performative action to resolve or remove bots and fake accounts only before its executives testified before Congress or ahead of a major election. The company was well known for putting out self-congratulatory press releases about how it clamped down on and removed scores of bots and bad actors—but let’s be honest, it never implemented long-term fixes to these known problems.
A simple change in liability, the bobbin, will ensure social networks run smoother by forcing them to focus on their consumers. This simple change will make these companies spend resources on security measures, monitoring technology, and even hiring staff to review advertising for accuracy, just like every other media outlet in America.
In other words, a small-government intervention will clean up the public market and force Threads—and Meta—to build a better, safer sewing machine. One that does not allow its users to be threatened by hate speech or acts of violence without real consequences.
It’s time for Congress to take out their brooms, evolve the Communications Decency Act, and help clean up these threads.
"Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people," António Guterres warned.
United Nations Secretary-General António Guterres on Monday implored governments around the world to take concerted action to curb the rapid online spread of destructive misinformation, disinformation, and hate speech.
"Alarm bells over the latest form of artificial intelligence—generative AI—are deafening," said Guterres. "They are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously."
"But the advent of generative AI must not distract us from the damage digital technology is already doing to our world," Guterres continued. "The proliferation of hate and lies in the digital space is causing grave global harm—now. It is fueling conflict, death, and destruction—now. It is threatening democracy and human rights—now. It is undermining public health and climate action—now."
"The proliferation of hate and lies in the digital space is causing grave global harm—now."
"When social media emerged a generation ago, digital platforms were embraced as exciting new ways to connect," noted the U.N. chief. "But today, this same technology is often a source of fear, not hope. Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people."
"This clear and present global threat demands clear and coordinated global action," he added.
Guterres delivered his speech at an event marking the publication of a new policy brief that will inform a U.N. Code of Conduct for Information Integrity on Digital Platforms, which is currently being developed ahead of next year's Summit of the Future.
In his introduction to the document, Guterres wrote that he hopes the U.N.'s recommendations will "provide a gold standard for guiding action to strengthen information integrity on digital platforms," including social media sites, search engines, and messaging apps.
"Alarm over generative AI, as relevant as it is, must not obscure damage being done by digital tech enabling the spread of hate speech, mis- & disinformation now. Fueling conflict & destruction. Threatening democracy & human rights. Undermining public health & #ClimateAction." —António Guterres (@antonioguterres) on Twitter, June 12, 2023
The brief includes proposals "aimed at creating guardrails to help governments come together around guidelines that promote facts while exposing conspiracies and lies and safeguarding freedom of expression and information," said Guterres. It also seeks "to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem."
Around the world, responses to misinformation, disinformation, and hate speech have so far been lacking, Guterres noted.
"Governments have sometimes resorted to drastic measures—including blanket internet shutdowns and bans—that lack any legal basis and infringe on human rights," the U.N. chief observed. Meanwhile, "some tech companies have done far too little, too late to prevent their platforms from contributing to violence and hatred."
The brief, part of the U.N.'s emerging framework for a joint international effort to tackle online disinformation, provides a roadmap "to make the digital space safer and more inclusive while vigorously protecting human rights," said Guterres.
As Guterres explained, the document begins to outline principles the U.N. hopes will be implemented "voluntarily." They include:
In addition, "the brief proposes that tech companies should undertake to move away from damaging business models that prioritize engagement above human rights, privacy, and safety," said Guterres. "It suggests that advertisers—who are deeply implicated in monetizing and spreading damaging content—should take responsibility for the impact of their spending."
"It recognizes the need for a fundamental shift in incentive structures," he added. "Disinformation and hate should not generate maximum exposure and massive profits."
According to The Associated Press, "Heidi Beirich, co-founder of the Global Project Against Hate and Extremism, agreed that while it's a positive step that the U.N. is calling for international solutions to this global problem, its code of conduct won't likely be sufficient to stop the torrent of false and hateful information online."
"The fact of the matter is that voluntary codes, including the companies' own terms of service on these issues, have failed to rein them in," Beirich told the news outlet. "The problem for the U.N. is they can't do what it seems is going to have to be done to deal with this problem, which is basically legislation."
The brief, which the U.N. sees as a blueprint for lawmakers, notes that "even as we seek solutions to protect information integrity in the current landscape, we must ensure that recommendations are future-proof, addressing emerging technologies and those yet to come."
To that end, Guterres stressed the need for "urgent and immediate measures to ensure that all AI applications are safe, secure, responsible, and ethical, and comply with human rights obligations."
As Al Jazeera reported, "Guterres has announced plans to start work by the end of the year on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations on how they can align with human rights, the rule of law, and [the] common good."
On Monday, the U.N. chief said he is open "to the idea that we could have an artificial intelligence agency" akin to the International Atomic Energy Agency. However, he added, "only member states can create it, not the Secretariat of the United Nations."
Why the digital public square needs public antitrust solutions.
Exactly one month after Elon Musk announced his acquisition of Twitter with a lame pun borrowed from Tumblr and Reddit, footage of the Christchurch mosque massacres resurfaced on the platform. It was yet another example of how Twitter backslid into chaos and hate under Musk. It also reopened wounds that I and other Muslims endured because Twitter and other platforms allowed this hate to spread in the first place. That is why I met the news that Twitter abruptly dissolved the Trust and Safety Council with a mix of sadness and relief.
Muslim Advocates was a member of the council, a non-binding advisory board for policy review and recommendations from worldwide experts. This mostly meant we got previews of much-hyped new features from Twitter that tinkered around the edges of the broader problem of online hate. Fundamentally, the council was a space of corporate pseudo-accountability. They offered us “access” and we were supposed to be content with our seat at the illusory table.
Engaging with tech platforms like Twitter about online hate has been an exercise in gaslighting: you complain about hateful content, the platform tells you the problem doesn’t exist and then you’re left wondering if it’s because they secretly agree with the hate or because the perpetrator is politically powerful—or both.
This inadequate process came to a screeching halt when Musk took charge. Almost all of our points of contact at Twitter disappeared, council meetings were canceled and we had to learn what was going on from hourly headlines about Musk.
All of this is to say that while the pre-Musk status quo was already harming Muslims, immigrants, LGBTQ+ and BIPOC communities with its negligence on hate speech, post-Musk Twitter was somehow worse. Musk barreled in touting an incoherent free speech absolutist ideology that turned Twitter into a haven for grifters gleefully using hate speech to target marginalized communities. As “chief twit,” Musk spends much of his time responding to people who espouse white nationalist propaganda and spread anti-LGBTQ conspiracy theories. He also reinstated the accounts of anti-Muslim conspiracy theorists and even notorious neo-Nazi Andrew Anglin. Meanwhile, Musk’s Twitter banned the accounts of left-wing activists and others who merely offended him—apparently also in the name of free speech?
The weekend after Thanksgiving, when Twitter’s automated defenses failed and allowed footage of the Christchurch massacre to recirculate, I was transported back to 2019. In the immediate aftermath of the shootings, my colleagues at Muslim Advocates had to process their own human responses to the atrocity while simultaneously rooting out footage of the attacks to flag for removal by the social media companies—essentially doing content moderation work the platforms should have been doing themselves, in the futile hope that our suffering wouldn’t be celebrated by bigots or inspire future copycats.
With or without Musk, Twitter and all social media wring profits from the targeting of our communities. Musk’s main change has been to stop even trying to mitigate these harms, and to actively validate the hatemongers he agrees with. I am sad that Musk’s dissolution of the Trust and Safety Council ends one admittedly half-hearted attempt to push back against hate. However, I am also relieved because though it was already exhausting to argue for my community’s humanity with Twitter, it feels inconceivable to do so with a class bully like Musk.
One thing I’ve learned from my years fighting online hate is that while Musk is deeply problematic, he is not the problem. Twitter is not the problem. Social media is not the problem. The problem is that megalomaniacal billionaires can take over our public squares, our support systems and our livelihoods on a whim and warp them to feed their endless hunger for power and adoration. Meanwhile, we exhaust ourselves trying to get them to value our lives more than their profits.
We all deserve more than to be the casualties of an egotistical, cringe-inducing billionaire’s mid-life crisis. We deserve more than an uneven playing field that consistently profits off of hate, yet purports to be a public square. We deserve to have the power to protect ourselves and the only way to do that is to take it from billionaires like Musk.
We’ve tried engaging with these social media billionaires and that has failed. To keep them in check, we need new solutions like antitrust law, which exists so that no one entity can exert unfettered control over entire aspects of our lives. Right now, we have a rare opportunity where lawmakers in both political parties at least claim to be open to antitrust solutions. We must seize it to finally empower and protect communities that have been victimized for far too long.