
What Happens When AI and Big Brother Get Together to Spy on You?

Artificial intelligence could supercharge threats to civil liberties, civil rights, and privacy.

Your friends aren't the only ones seeing your tweets on social media. The FBI and the Department of Homeland Security (DHS), as well as police departments around the country, are reviewing and analyzing people's online activity. These programs are only likely to grow as generative artificial intelligence (AI) promises to remake our online world with better, faster, and more accurate analyses of data, as well as the ability to generate humanlike text, video, and audio.

While social media can help law enforcement investigate crimes, many of these monitoring efforts reach far more broadly even before bringing AI into the mix. Programs aimed at “situational awareness,” like those run by many parts of DHS or police departments preparing for public events, tend to have few safeguards. They often veer into monitoring social and political movements, particularly those involving minority communities. For instance, DHS’s National Operations Center issued multiple bulletins on the 2020 racial justice protests. The Boston Police Department tracked posts by Black Lives Matter protesters and labeled online speech related to Muslim religious and cultural practices as “extremist” without any evidence of violence or terrorism. Nor does law enforcement limit itself to scanning public posts. The Memphis police, for example, created a fake Facebook profile to befriend and gather information from Black Lives Matter activists.

Internal government assessments cast serious doubt on the usefulness of broad social media monitoring. In 2021, after extensive reports of the department's overreach in monitoring racial justice protesters, the DHS General Counsel's office reviewed the activities of agents collecting social media and other open-source information to try to identify emerging threats. It found that agents gathered material on "a broad range of general threats," ultimately yielding "information of limited value." The Biden administration ordered a review of the Trump-era policy requiring nearly all visa applicants to submit their social media handles to the State Department to help in immigration vetting, a practice that affects some 15 million people annually and that the Brennan Center has sought to challenge. While the review's results have not been made public, the intelligence officials charged with conducting it concluded that collecting social media handles added "no value" to the screening process.

This is consistent with earlier findings. According to a 2016 brief prepared by the Department of Homeland Security for the incoming administration, in similar programs to vet refugees, account information "did not yield clear, articulable links to national security concerns, even for those applicants who were found to pose a potential national security threat based on other security screening results." The following year, the DHS Inspector General released an audit of these programs, finding that the department had not measured their effectiveness, rendering them an insufficient basis for future initiatives. Despite this failure to show that monitoring programs actually bolster national security, the government continues to collect, use, and retain social media data.

The pervasiveness — and problems — of social media surveillance are almost certain to be exacerbated by new AI tools, including generative models, which agencies are racing to adopt.

Generative AI will enable law enforcement to more easily use covert accounts. In the physical world, undercover informants have long raised issues, especially when they have been used to trawl communities rather than target specific criminal activity. Online undercover accounts are far easier and cheaper to create and can be used to trick people into interacting and inadvertently sharing personal information, such as the names of their friends and associates. New AI tools could generate fake accounts with a sufficient range of interests and connections to look real and could autonomously interact with people online, saving officer time and effort. This will supercharge the problem of effortless surveillance, which the Supreme Court has recognized may "alter the relationship between citizen and government in a way that is inimical to democratic society." These concerns are compounded by the fact that few police departments impose restrictions on the use of undercover accounts, with many allowing officers to monitor people online without a clear rationale, documentation, or supervision. The same is true for federal agencies such as DHS.

Currently, despite the hype generated by their purveyors, social media surveillance tools seem to operate on a relatively rudimentary basis. While the companies that sell them tend to be secretive about how they work, the Brennan Center's research suggests serious shortcomings. Some popular tools do not use scientific methods for identifying relevant datasets, much less test them for bias. They often rely on keywords and phrases to identify potential threats, which strips out the context necessary to understand whether something is in fact a threat and not, for example, someone discussing a video game. It is possible that large language models, such as ChatGPT, will advance this capability — or at least be perceived and sold as doing so — and incentivize greater use of these tools.
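To make that failure mode concrete, here is a minimal, hypothetical sketch of the kind of keyword-and-phrase flagging described above. The word list and sample posts are invented purely for illustration; actual vendor systems are proprietary and their methods undisclosed.

```python
# Hypothetical sketch of keyword-based flagging; the keyword list and posts
# are invented for illustration and do not reflect any real vendor's system.
import re

# Invented "threat" keyword list of the kind such tools reportedly rely on.
THREAT_KEYWORDS = {"attack", "bomb", "shoot", "take out"}

def flag_post(text: str) -> list[str]:
    """Return every keyword that appears, ignoring all surrounding context."""
    lowered = text.lower()
    return [kw for kw in sorted(THREAT_KEYWORDS)
            if re.search(rf"\b{re.escape(kw)}\b", lowered)]

posts = [
    "Going to bomb the final boss tonight, then shoot for a speedrun record",  # video-game chatter
    "The march starts at city hall at noon, bring water and signs",            # ordinary speech
]

for post in posts:
    hits = flag_post(post)
    print(f"{'FLAGGED' if hits else 'ok'}\t{hits}\t{post}")
```

A matcher like this flags the gaming post on "bomb" and "shoot" while learning nothing about intent, which is precisely the missing-context problem such tools create.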

At the same time, any such improvements may be offset by the fact that AI is widely expected to further degrade the online information environment, exacerbating problems of provenance and reliability. Social media is already suffused with inaccurate and misleading information. According to a 2018 MIT study, false political news is 70 percent more likely to be retweeted than truthful content on X (formerly Twitter). Bots and fake accounts, which can already mimic human behavior, are also a challenge; during the COVID-19 pandemic, bots were found to proliferate misinformation about the disease, and they could just as easily spread fake information generated by AI, deceiving platform users. Generative AI makes creating false news and fake identities easier, adding to an already polluted online information environment. Moreover, AI has a tendency to "hallucinate," or make up information, a seemingly unfixable problem that is ubiquitous among generative AI systems.

Generative AI also exacerbates longstanding problems. The promise of better analysis does nothing to ease the First Amendment issues raised by social media monitoring. Bias in algorithmic tools has long been a concern, from predictive policing programs that treat Black people as suspect to content moderation practices that disfavor Muslim speech. For example, Instagram users recently found that the label "terrorist" was added to their English bios if their Arabic bios included the word "Palestinian," the Palestinian flag emoji, and the common Arabic phrase "praise be to God."

The need to address these risks is front and center in President Biden's AI executive order and a draft memorandum from the Office of Management and Budget (OMB) that sets out standards for federal agency use of AI. The OMB memo identifies social media monitoring as a use of AI that impacts individuals' rights, and it therefore requires agencies using this technology to follow critical rules on transparency, efficacy testing, and the mitigation of bias and other risks. Unfortunately, these sensible rules do not apply to national security and intelligence uses and do not affect police departments. But they should.

© 2023 Brennan Center for Justice