One expert said the law "is littered with concessions to industry lobbying, exemptions for the most dangerous uses of AI by law enforcement and migration authorities, and prohibitions... full of loopholes."
As European Union policymakers on Wednesday lauded the approval of the Artificial Intelligence Act, critics warned that the legislation represents a giveaway to corporate interests and falls short in key areas.
Daniel Leufer, a senior policy analyst at the Brussels office of advocacy group Access Now, called the bloc's landmark AI legislation "a failure from a human rights perspective and a victory for industry and police."
Following negotiations to finalize the AI Act in December, the world's first sweeping regulations for the rapidly evolving technology were adopted by the European Parliament in a 523-46 vote with 49 abstentions. After some final formalities, the law is expected to take effect in May or June, with various provisions entering into force over the next few years.
"Even though adopting the world's first rules on the development and deployment of AI technologies is a milestone, it is disappointing that the E.U. and its 27 member states chose to prioritize the interest of industry and law enforcement agencies over protecting people and their human rights," said Mher Hakobyan, Amnesty International's advocacy adviser on artificial intelligence.
The law applies a "risk-based approach" to AI products and services. As The Associated Press reported Wednesday:
The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.
Some AI uses are banned because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in school and workplaces.
Other banned uses include police scanning faces in public using AI-powered remote "biometric identification" systems, except for serious crimes like kidnapping or terrorism.
While some praised positive commonsense guidelines and protections, Leufer said that "the new AI Act is littered with concessions to industry lobbying, exemptions for the most dangerous uses of AI by law enforcement and migration authorities, and prohibitions so full of loopholes that they don't actually ban some of the most dangerous uses of AI."
Along with also expressing concerns about how the law will impact migrants, refugees, and asylum-seekers, Hakobyan highlighted that "it does not ban the reckless use and export of draconian AI technologies."
Access Now and Amnesty are part of the #ProtectNotSurveil coalition, which released a joint statement warning that the AI Act "sets a dangerous precedent," particularly with its exemptions for law enforcement, migration officials, and national security.
Other members of the coalition include EuroMed Rights, European Digital Rights, and Statewatch, whose executive director, Chris Jones, said in a statement that "the AI Act might be a new law but it fits into a much older story in which E.U. governments and agencies—including Frontex—have violated the rights of migrants and refugees for decades."
Frontex—officially the European Border and Coast Guard Agency—has long faced criticism from human rights groups for failing to protect people entering the bloc, particularly those traveling by sea.
"Implemented along with a swathe of new restrictive asylum and migration laws, the AI Act will lead to the use of digital technologies in new and harmful ways to shore up 'Fortress Europe' and to limit the arrival of vulnerable people seeking safety," Jones warned. "Civil society coalitions across and beyond Europe should work together to mitigate the worst effects of these laws, and continue working towards building societies that prioritize care over surveillance and criminalization."
"It has severe shortcomings from the point of view of fundamental rights and should not be treated as a golden standard for rights-based AI regulation."
Campaigners hope policymakers worldwide now take lessons from this legislative process.
In a Wednesday op-ed, Laura Lazaro Cabrera, counsel and director of the Center for Democracy & Technology Europe's Equity and Data Program, argued the law "will become the benchmark for AI regulation globally in what has become a race against the clock as lawmakers grapple with a fast-moving development of a technology with far-reaching impacts on our basic human rights."
After the vote, Lazaro Cabrera stressed that "there's so much at stake in the implementation of the AI Act and so, as the dust settles, we all face the difficult task of unpacking a complex, lengthy, and unprecedented law. Close coordination with experts and civil society will be crucial to ensure that the act's interpretation and application mean that it is effective and consistent with the act's own articulated goals: protecting human rights, democracy, and the rule of law."
European Center for Not-for-Profit Law's Karolina Iwańska responded similarly: "Let's be clear: It has severe shortcomings from the point of view of fundamental rights and should not be treated as a golden standard for rights-based AI regulation. Having said that, we will work on the strongest possible implementation."
Yannis Vardakastanis, president of the European Disability Forum, said in a statement that "the AI Act addresses human rights, but not as comprehensively as we hoped for—we now call on the European Union to close this gap with future initiatives."
Amnesty's Hakobyan emphasized that "countries outside of the E.U. should learn from the bloc's failure to adequately regulate AI technologies and must not succumb to pressures by the technology industry and law enforcement authorities whilst developing regulation. States should instead put in place robust and binding AI legislation which prioritizes people and their rights."
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, the sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing on AI software before operating in the EU, particularly for applications like self-driving vehicles.
Tools that could pose risks to hiring practices would also need to be subjected to risk assessments, and human oversight would be required in deploying the software.
AI systems including chatbots would be subjected to new transparency rules to avoid the creation of manipulated images and videos—known as deepfakes—without the public knowing that the images were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the European Parliament session ends in May, includes exemptions to facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska told Reuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told The New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."
The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."
Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.
The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."
The agreement is nonbinding and deals with four main areas: secure design, development, deployment, and operation and maintenance.
Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.
The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."
Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.
"But that's not the only area where AI can cause harm," Eisen said on social media.
Eisen pointed to a recent Brookings analysis about how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.
"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."
At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," considering it does not require policymakers or companies to adhere to the guidelines.
Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.
"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."
European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require makers of AI systems to publish summaries of the training material they use and prove that they will not generate illegal content. It would also bar companies from scraping biometric data from social media, which a U.S. AI company was found to be doing last year.
"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."