"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, a sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing on AI software before operating in the E.U., particularly for applications like self-driving vehicles.
Tools that could pose risks to hiring practices would also be subject to risk assessments, and human oversight would be required in deploying the software.
AI systems, including chatbots, would be subject to new transparency rules meant to prevent manipulated images and videos—known as deepfakes—from circulating without the public knowing they were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the European Parliament session ends in May, includes exemptions to its facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska toldReuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told The New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."