Federal Study on Racial Biases in Facial Recognition Technology Confirms Warnings of Civil Liberties Groups

A display shows a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference, which showcases artificial intelligence, deep learning, virtual reality, and autonomous machines, in Washington, D.C., November 1, 2017. (Photo: Saul Loeb/AFP via Getty Images)

African American and Asian American men were misidentified 100 times as often as white men.

The U.S. government's first major federal study of facial recognition surveillance, released Thursday, shows the technology's extreme racial and gender biases, confirming what privacy and civil rights groups have warned about for years.

In a study of 189 algorithms used by law enforcement agencies to match facial recognition images with names in state and federal databases, the National Institute of Standards and Technology (NIST) found that Asian American and African American men were misidentified 100 times as often as white men.
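In evaluations of this kind, the key quantity is the false match rate: how often photos of two different people are wrongly declared a match, computed separately for each demographic group and then compared as a ratio. Below is a minimal sketch of that comparison; the trial counts are invented solely to mirror the scale of the reported disparity and are not figures or code from the NIST report.

```python
# Illustrative sketch only -- not NIST's code or data.
# Computes a per-group false match rate (FMR) from (group, matched)
# trial records, where `matched` is True when an impostor pair
# (two different people) was wrongly accepted as the same person.
from collections import defaultdict

def false_match_rates(trials):
    counts = defaultdict(lambda: [0, 0])  # group -> [false matches, total trials]
    for group, matched in trials:
        counts[group][0] += int(matched)
        counts[group][1] += 1
    return {g: fm / total for g, (fm, total) in counts.items()}

# Hypothetical numbers chosen only to illustrate a 100x disparity.
trials = (
    [("white_men", m) for m in [True] + [False] * 9999]                 # FMR = 0.0001
    + [("asian_american_men", m) for m in [True] * 100 + [False] * 9900]  # FMR = 0.01
)

rates = false_match_rates(trials)
print(rates)
print(f'{rates["asian_american_men"] / rates["white_men"]:.0f}x')  # -> 100x
```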

The algorithms disproportionately favored middle-aged white men overall, identifying them accurately more often than young people, the elderly, and women of all ages, while Native American people were misidentified most frequently.

Such misidentifications can lead to false arrests, as well as an inability to secure employment, housing, or credit, the MIT Media Lab found in a study it conducted in 2018.

"Criminal courts are using algorithms for sentencing, mirroring past racial biases into the future," tweeted Brianna Wu, a U.S. House candidate in Massachusetts. "Tech is a new, terrifying frontier for civil rights."

The NIST study echoed the results of that MIT Media Lab study, entitled "Gender Shades," in which researchers found that algorithms developed by three different companies most often misidentified women of color.

NIST's report is "a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties," Joy Buolamwini, lead author of the Gender Shades report, told the Washington Post.

Digital rights group Fight for the Future wrote on social media that the study demonstrated "why dozens of groups and tens of thousands of people are calling on Congress to ban facial recognition."

Fight for the Future launched its #BanFacialRecognition campaign in July, calling on local, state, and federal governments to ban--not merely regulate--the use of the technology by law enforcement and other public agencies.

"Face recognition technology--accurate or not--can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale." --Jay Stanley, ACLU"This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits," Fight for the Future said when it launched the campaign.

This week, lawmakers in Alameda, Calif., became the latest local officials to vote for a ban.

Despite warnings from Fight for the Future and other groups, including the ACLU--which sued the federal government in October over its use of the technology--the FBI has run nearly 400,000 facial recognition searches of local and federal databases since 2011.

The algorithms studied by NIST were developed by companies including Microsoft, Intel, and Panasonic. Amazon, which developed facial recognition software called Rekognition, did not provide its algorithm for the study.

"Amazon is deeply cowardly when it comes to getting their facial recognition algorithm audited," tweeted Cathy O'Neil, an algorithm auditor.

Jay Stanley, a senior policy analyst at the ACLU, told the Post that inaccuracies in algorithms are "only one concern" that civil liberties groups have about the surveillance programs that the federal government is now studying.

"Face recognition technology--accurate or not--can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale," Stanley said.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.