Jan 08, 2022
January 6, 2021 will be remembered as one of the darkest days for democracy in modern U.S. history, and the attack's anniversary coincides with the kickoff of an election year. Some lawmakers responded to January 6 by attempting to reduce online free speech by modifying Section 230. However, this misguided approach would fail to address radicalization and hate online, undermine human rights, and further solidify Big Tech's domination of the internet. In this election year, lawmakers must stop the true threat to democracy: mass surveillance.
January 6 is just the most recent example of platforms identifying people who may be vulnerable to radicalization.
In 2017, data surpassed oil as the world's most valuable commodity. That is unsurprising, given that companies like Facebook and YouTube have demonstrated that data can be weaponized to manipulate behavior. Facebook, Instagram, and YouTube are dangerous because their business model is designed to maximize profit by whatever means necessary. Massive amounts of personal data inform the algorithms that decide what shows up in news feeds. To maximize engagement, the algorithms amplify the type of content users interact with, often leading them to increasingly extreme content. January 6 is just the most recent example of platforms identifying people who may be vulnerable to radicalization, then targeting them in a way that dramatically increases their exposure to violent and hateful content.
Data privacy is an election integrity issue because manipulation like Facebook's is powered by personal data collected via mass surveillance. Digital disinformation is very different from print, television, and radio disinformation because mass surveillance allows propagandists to customize content based on inferences about personality, biases, and fears, then target those who are most susceptible. If policymakers don't learn from January 6, personal data-powered election interference will become the norm. To secure our elections, Congress and regulatory agencies must cut off the fuel supply for Facebook's manipulation machine by implementing a federal data privacy law.
For nearly a decade, Facebook has been researching the effects of content manipulation on emotion. In 2012, it experimented on 689,000+ people, investigating whether news feed content could influence emotions. This unethical experimentation happened without users' knowledge or consent. In 2013, researchers showed that Facebook "likes" alone can predict race with 95% accuracy and political party with 85% accuracy. Further studies used Facebook data to profile users' personalities, with accuracy comparable to ratings by their spouses.
U.K.-based political data firm SCL Group saw an opportunity, and in 2014 it received $15 million from Republican megadonor Robert Mercer. Using this funding, SCL Group established U.S. subsidiary Cambridge Analytica with the explicit goal of influencing the 2016 election. Mercer appointed future White House adviser Steve Bannon as a director of Cambridge Analytica. The company helped guide the Trump campaign's deployment of 100,000+ unique political ads targeting highly specific groups of voters based on inferences about personality, beliefs, and fears. In addition to manipulating potential voters, the firm used microtargeting in voter suppression efforts. The Trump campaign spent up to $70 million per month on Facebook ads.
Headlines describe Cambridge Analytica as a data breach, but Facebook disagrees: "People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked." Facebook did make some changes, and it was aware before 2016 that Cambridge Analytica had abused its platform--but ultimately it did not ensure the data was deleted and continued to allow election ad buys from those using it. Facebook even had employees embedded in Trump's digital ad campaign. By the time it disbanded in 2018, Cambridge Analytica had been involved in over 200 international election interference efforts in 68 countries. Former employees have founded spin-offs including Auspex, IDEIA Big Data, HuMn Behavior, and Data Propria.
Facebook took lessons from Cambridge Analytica--namely, how to sow hate itself. Internal documents in the Facebook Papers detail how Facebook treated "angry" emoji reactions as up to five times more valuable than "likes." This continued even after its own research concluded content that elicits hate and anger already receives disproportionate engagement. Further internal studies demonstrated that people are exposed to conspiratorial content almost immediately after interacting with mainstream conservative media because Facebook prioritizes engagement at all costs.
In 2020, Facebook banned political ads only after Election Day, and its election policies expired almost immediately after polls closed. This, combined with a policy of not fact-checking political ads (despite heavily policing user posts), had disastrous consequences for the integrity of the 2020 election. Facebook's decisions, not about what speech to allow but about the actual design of its product, contributed to voter suppression, election disinformation, and the proliferation of "Stop the Steal" conspiracy theories. These theories are directly responsible for the violence that occurred at the Capitol on January 6. Facebook failed to act as the attack was openly organized and promoted on the platform.
Facebook's business model has evolved into social engineering via psychological warfare, and it's crucial that lawmakers address this existential threat.
Meanwhile, Facebook's internal civil rights audit concluded the platform systematically fails to address hate, violence, and radicalization. Content about racism and racial justice, particularly when it is posted by Black people, is frequently censored for "violating policies against hate speech," while far-right extremism thrives. When presented with suggested changes, Facebook refused to adopt them, citing fears of backlash from "conservative partners."
Facebook recently announced it would limit ad targeting based on political and cause affiliations, but it's too little, too late. The networks are already mapped, the data of 87 million users still exists, and rebranded bad actors are all gearing up for the midterms. Facebook's advertising tools can now "nanotarget" an individual user, suggesting that we have entered a new era of personalized propaganda.
Facebook whistleblower Frances Haugen's congressional testimony shined a light on issues that human rights activists have been warning about for years. Prior to her testimony, most lawmakers sought to address the harms of social media by clamping down on free speech through changes to Section 230. Haugen, however, showed that algorithmic manipulation based on personal data is the actual fuel for the dangerous behavior Facebook encourages.
We cannot restructure society's relationship with social media or have safe and fair elections without ending the exploitation and manipulation that currently underpins our digital lives. Facebook's business model has evolved into social engineering via psychological warfare, and it's crucial that lawmakers address this existential threat. To protect the integrity of future elections, lawmakers must disarm Facebook's data weapon by regulating surveillance, not speech.
Correction: This piece was initially published with the wrong byline. That error has been corrected.