"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, a sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing of AI software before operating in the E.U., particularly for applications like self-driving vehicles.
Tools that could pose risks in hiring would also need to undergo risk assessments, and human oversight would be required when the software is deployed.
AI systems, including chatbots, would be subject to new transparency rules to prevent the creation of manipulated images and videos—known as deepfakes—without the public knowing they were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the European Parliament session ends in May, includes exemptions to facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska toldReuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told The New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."
"By turning the AI Bill of Rights from a nonbinding statement of principles into federal policy, your administration would send a clear message to both private actors and federal regulators."
Amid the rapid development and deployment of artificial intelligence systems, a pair of Democratic U.S. lawmakers on Wednesday led more than a dozen of their colleagues in urging President Joe Biden to issue an executive order making the White House's "AI Bill of Rights" official federal policy.
Sen. Ed Markey (D-Mass.) and Congressional Progressive Caucus Chair Pramila Jayapal (D-Wash.) spearheaded a letter to Biden asserting that "the federal government's commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era."
"By turning the AI Bill of Rights from a nonbinding statement of principles into federal policy, your administration would send a clear message to both private actors and federal regulators: AI systems must be developed with guardrails," the letter states. "Doing so would also strengthen your administration's efforts to advance racial equity and support underserved communities, building on important work from previous executive orders."
The lawmakers asserted that implementing the AI Bill of Rights is "a crucial step in developing an ethical framework for the federal government's role" in artificial intelligence. They stressed that five principles—"safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback"—must be the core of the policy.
The letter further argues that "implementing these principles will not only protect communities harmed by these technologies, it will also help inform ongoing policy conversations in Congress and show clear leadership on the global stage."
In July, the White House secured voluntary risk management commitments from seven leading AI companies, a move praised by campaigners and experts—even as they stressed the need for further action from Congress and federal regulators.
Earlier this year, Markey and Rep. Doris Matsui (D-Calif.) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which would prohibit Big Tech from using black-box algorithms that drive discrimination and inequality.
Jayapal, Markey, and Sen. Jeff Merkley (D-Ore.) in March led the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act, which would stop the government from using facial recognition and other biometric technologies, which they said "pose significant privacy and civil liberties issues and disproportionately harm marginalized communities."
Wednesday's letter came as the consumer advocacy group Public Citizen urged the Federal Election Commission to officially affirm that so-called "deepfakes" in U.S. political campaign communications are illegal under existing legislation proscribing fraudulent representation.
The lawmakers' call also comes just weeks after Public Citizen warned that Big Tech is creating and deploying AI systems "that deceptively mimic human behavior to aggressively sell their products and services, dispense dubious medical and mental health advice, and trap people in psychologically dependent, potentially toxic relationships with machines."
For many, the abuses of technology are not some future threat.
Technology is often hailed as an engine of progress, but if unleashed without democratic oversight, it can cause great harm. Artificial Intelligence, or AI, is the latest example. “Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war,” warned over 250 computer scientists in a one-sentence statement issued by the Center for AI Safety. They worry that Artificial Intelligence will outpace human intelligence, then orchestrate our demise as a species.
But for many, the abuses of technology are not some future threat. Take the cases of Henrietta Lacks, who died of cervical cancer in 1951, and a more recent example, in 2023, of another Black woman, Porcha Woodruff, a young Detroit mother wrongly arrested for armed robbery and carjacking after being misidentified by AI-driven facial recognition software.
“The six police officers came to knock on the door,” Porcha Woodruff said on the Democracy Now! news hour, recounting her arrest with “a warrant for my arrest for carjacking. In the midst of the conversation, I opened up my door a little bit wider so that they could see I was eight months pregnant…I went back and forth with the police officers for a while, trying to convince them, ‘You have the wrong person.’”
Porcha Woodruff was handcuffed in front of her two young, terrified daughters and jailed. The actual perpetrator’s face had been recorded by a camera, and facial recognition software pointed to Porcha. She was the first woman known to have been arrested due to faulty facial recognition software. At least five men have been similarly wrongly arrested; all six are Black. Porcha was held for eleven hours and released on a $100,000 bond. She began having contractions in the jail cell and, immediately after her release, rushed to the hospital, where she was treated for dehydration.
“In 2019, the government shared a study showing that African American faces and Asian faces were 10 to 100 times more likely to be misidentified,” Joy Buolamwini, founder of the Algorithmic Justice League, explained on Democracy Now! “In many instances, the worst performance is on the faces of Black women. When you look at the data and what we’ve recorded on the performance of facial recognition technologies, it does mean people of color, women of color, Black women, in particular, are at even higher risk of these types of misidentifications.”
Porcha Woodruff is 32 years old. Henrietta Lacks was a 31-year-old mother of five, who went to Johns Hopkins, the only hospital in Baltimore that would see Black patients in the early 1950s. “She ended up going under anesthetic to get a biopsy of her cervix,” Rebecca Skloot said on Democracy Now! Skloot is the author of the bestselling biography, “The Immortal Life of Henrietta Lacks,” also made into a film starring Oprah Winfrey. “That’s when this doctor just took a little extra piece and put that in a dish and sent it to George Gey, who was the head of tissue culture research and had been trying to grow cells for decades. They had been able to keep cells alive for maybe 24 hours in the past, but hers—not only did they not die, but they began doubling their numbers every 24 hours. So they just grew with this incredible intensity that no one had ever seen before.”
Henrietta Lacks died of cancer not long after, but her cells lived on, becoming a cornerstone of biomedical research. The cells taken from her without her permission have helped cure or treat countless diseases, from polio to HIV to HPV, develop vaccines and other medicines, and map the human genome. Doctors from Johns Hopkins continued to deceive her family members, subjecting them to studies in an attempt to learn why her cells were able to survive.
Johns Hopkins called Henrietta Lacks’ cells “HeLa cells,” claiming they came from a fictitious person, “Helen Lane.” Many companies profited from her cells. On August 1st, her family settled with one company, Thermo Fisher Scientific.
One of her grandsons, Alfred Lacks Carter, Jr., announced, “Our family member, our loved one, Henrietta Lacks, 103 years old today… it couldn’t have been a more fitting day for her to have justice, for her family to have relief. It was a long fight, over 70 years. And Henrietta Lacks gets her day.”
Hopefully, Porcha Woodruff will get her day, too. She is suing Detroit for wrongful arrest and imprisonment, malicious prosecution, and for its use of the demonstrably racist AI-driven facial recognition software. She could well be the impetus for passage of the Facial Recognition and Biometric Technology Moratorium Act now before Congress.
From Henrietta Lacks to Porcha Woodruff, it is past time we recognize and reject racist abuses of technology.