Privacy advocates on Monday urged lawmakers to ban facial recognition in schools in response to a new study finding that use of the technology in educational settings would likely lead to a number of negative consequences, including the normalization of surveillance and the worsening of racial biases.
"Using facial recognition in schools amounts to unethical experimentation on children," said Evan Greer, deputy director of Fight for the Future.
The digital rights group has been vocal in its opposition to facial recognition, or FR, and last year launched the BanFacialRecognition.com website along with dozens of other groups.
Greer said in her statement that moves made during the coronavirus crisis by companies that sell the technology are simply adding more urgency to the demand for a ban.
"We're already seeing surveillance vendors attempt to exploit the Covid-19 pandemic to push for the use of this ineffective, invasive, and blatantly racist technology," she said. "It's time to draw a line in the sand right now."
"Lawmakers should act quickly to ban facial recognition in schools, as well as its use by law enforcement and corporations," added Greer.
The new comments from Greer follow a study (pdf) out Monday from researchers at the University of Michigan Ford School of Public Policy's Science, Technology, and Public Policy (STPP) program entitled "Cameras in the Classroom."
"Schools have also begun to use [FR] to track students and visitors for a range of uses, from automating attendance to school security," the researchers wrote, though they noted that the technology's use in schools is "not yet widespread."
But, the authors added, there's good reason to stop its spread:
[O]ur analysis reveals that FR will likely have five types of implications: exacerbating racism, normalizing surveillance and eroding privacy, narrowing the definition of the "acceptable" student, commodifying data, and institutionalizing inaccuracy. Because FR is automated, it will extend these effects to more students than any manual system could.
FR "is likely to mimic the impacts of school resource officers (SROs), stop-and-frisk policies, and airport security," all of which "purport to be objective and neutral systems, but in practice they reflect the structural and systemic biases of the societies around them," the study says.
"All of these practices have had racist outcomes due to the users of the systems disproportionately targeting people of color," the researchers wrote.
The technology further stands to "normalize the experience of being constantly surveilled starting at a young age" and holds the possibility of "mission creep," the researchers warned, "as administrators expand the usage of the technology outside of what was originally defined."
According to lead author Shobita Parthasarathy, STPP director and professor of public policy, "The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous."