The U.S. government's first major federal study of facial recognition surveillance, released Thursday, shows the technology's extreme racial and gender biases, confirming what privacy and civil rights groups have warned about for years.
In a study of 189 algorithms used by law enforcement agencies to match facial recognition images with names in state and federal databases, the National Institute of Standards and Technology (NIST) found that Asian American and African American men were misidentified up to 100 times as often as white men.
Overall, the algorithms performed best on middle-aged white men, identifying them accurately more often than children, the elderly, and women of all ages, while Native Americans were misidentified most frequently.
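To make that disparity concrete: a "false match" occurs when an algorithm scores images of two different people as the same person, and the study's headline finding is that this error rate varies sharply by demographic group. Below is a minimal illustrative sketch of how such rates might be tallied per group; the function names, threshold, and data layout are hypothetical stand-ins, not NIST's actual test harness.

```python
# Illustrative sketch only -- the matcher, threshold, and data format
# here are hypothetical, not NIST's actual evaluation code.
from collections import defaultdict

def false_match_rate_by_group(pairs, match_score, threshold=0.8):
    """pairs: iterable of (group, image_a, image_b) where the two images
    show DIFFERENT people. A score at or above `threshold` counts as a
    false match. Returns the false-match rate per demographic group."""
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, img_a, img_b in pairs:
        trials[group] += 1
        if match_score(img_a, img_b) >= threshold:
            false_matches[group] += 1
    return {g: false_matches[g] / trials[g] for g in trials}

# A biased system is one where these rates diverge sharply: e.g., a rate
# of 0.001 for one group versus 0.1 for another is the kind of
# hundred-fold disparity the NIST study reported.
```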
Such misidentifications can lead to false arrests, as well as an inability to secure employment, housing, or credit, MIT Media Lab researchers found in a 2018 study.
"Criminal courts are using algorithms for sentencing, mirroring past racial biases into the future," tweeted Brianna Wu, a U.S. House candidate in Massachusetts. "Tech is a new, terrifying frontier for civil rights."
The NIST study echoed the results of MIT Media Lab's "Gender Shades" study, in which researchers found that algorithms developed by three different companies most often misidentified women of color.
NIST's report is "a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties," Joy Buolamwini, lead author of the Gender Shades report, told the Washington Post.
Digital rights group Fight for the Future wrote on social media that the study demonstrated "why dozens of groups and tens of thousands of people are calling on Congress to ban facial recognition."
Fight for the Future launched its #BanFacialRecognition campaign in July, calling on local, state, and federal governments to ban the use of the technology by law enforcement and other public agencies rather than merely regulating it.
"Face recognition technology--accurate or not--can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale." --Jay Stanley, ACLU"This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits," Fight for the Future said when it launched the campaign.
This week, lawmakers in Alameda, California, became the latest local officials to vote for a ban.
Despite warnings from Fight for the Future and other groups, including the ACLU, which sued the federal government in October over its use of the technology, the FBI has run nearly 400,000 searches of local and federal databases using facial recognition since 2011.
The algorithms studied by NIST were developed by companies including Microsoft, Intel, and Panasonic. Amazon, which developed facial recognition software called Rekognition, did not provide its algorithm for the study.
"Amazon is deeply cowardly when it comes to getting their facial recognition algorithm audited," tweeted Cathy O'Neil, an algorithm auditor.
Jay Stanley, a senior policy analyst at the ACLU, told the Post that inaccuracies in algorithms are "only one concern" that civil liberties groups have about the surveillance programs that the federal government is now studying.
"Face recognition technology--accurate or not--can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale," Stanley said.