Digital rights advocates on Tuesday welcomed Facebook's announcement that it plans to jettison its facial recognition system, which critics contend is a dangerous and often inaccurate technology abused by governments and corporations to violate people's privacy and other rights.
"Corporate use of face surveillance is very dangerous to people's privacy."
Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation (EFF) who last month called facial recognition technology "a special menace to privacy, racial justice, free expression, and information security," commended the new Facebook policy.
"Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology," he said. "Corporate use of face surveillance is very dangerous to people's privacy."
The social networking giant first introduced facial recognition software in late 2010 as a feature to help users identify and "tag" friends without having to comb through photos. The company went on to amass one of the world's largest digital photo archives, much of it organized through the system. Facebook says it will delete the facial recognition data of more than a billion people, although the company will keep DeepFace, the advanced algorithm that powers the system.
In a blog post, Jerome Pesenti, the vice president of artificial intelligence at Meta (the new name of Facebook's parent company following a rebranding last week that was widely condemned as a ploy to distract from recent damning whistleblower revelations), described the policy change as "one of the largest shifts in facial recognition usage in the technology's history."
"The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole," he wrote.
The New York Times reports:
Facial recognition technology, which has advanced in accuracy and power in recent years, has increasingly been the focus of debate because of how it can be misused by governments, law enforcement, and companies. In China, authorities use the capabilities to track and control the Uighurs, a largely Muslim minority. In the United States, law enforcement has turned to the software to aid policing, leading to fears of overreach and mistaken arrests.
Concerns over actual and potential misuse of facial recognition systems have prompted bans on the technology in more than a dozen U.S. locales, beginning with San Francisco in 2019 and since spreading to cities from Portland, Maine, to Portland, Oregon.
Caitlin Seeley George, campaign director at Fight for the Future, was among the online privacy campaigners who welcomed Facebook's move. In a statement, she said that "facial recognition is one of the most dangerous and politically toxic technologies ever created. Even Facebook knows that."
Seeley George continued:
From misidentifying Black and Brown people (which has already led to wrongful arrests) to making it impossible to move through our lives without being constantly surveilled, we cannot trust governments, law enforcement, or private companies with this kind of invasive surveillance.
"Even as algorithms improve, facial recognition will only be more dangerous," she argued. "This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft."
Seeley George said the "only logical action" for lawmakers and companies to take is to ban facial recognition.
Amid applause for the company's announcement, some critics took exception to Facebook's retention of DeepFace, as well as its consideration of "potential future applications" for facial recognition technology.