SUBSCRIBE TO OUR FREE NEWSLETTER
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.
5
#000000
#FFFFFF
");background-position:center;background-size:19px 19px;background-repeat:no-repeat;background-color:var(--button-bg-color);padding:0;width:var(--form-elem-height);height:var(--form-elem-height);font-size:0;}:is(.js-newsletter-wrapper, .newsletter_bar.newsletter-wrapper) .widget__body:has(.response:not(:empty)) :is(.widget__headline, .widget__subheadline, #mc_embed_signup .mc-field-group, #mc_embed_signup input[type="submit"]){display:none;}:is(.grey_newsblock .newsletter-wrapper, .newsletter-wrapper) #mce-responses:has(.response:not(:empty)){grid-row:1 / -1;grid-column:1 / -1;}.newsletter-wrapper .widget__body > .snark-line:has(.response:not(:empty)){grid-column:1 / -1;}:is(.grey_newsblock .newsletter-wrapper, .newsletter-wrapper) :is(.newsletter-campaign:has(.response:not(:empty)), .newsletter-and-social:has(.response:not(:empty))){width:100%;}.newsletter-wrapper .newsletter_bar_col{display:flex;flex-wrap:wrap;justify-content:center;align-items:center;gap:8px 20px;margin:0 auto;}.newsletter-wrapper .newsletter_bar_col .text-element{display:flex;color:var(--shares-color);margin:0 !important;font-weight:400 !important;font-size:16px !important;}.newsletter-wrapper .newsletter_bar_col .whitebar_social{display:flex;gap:12px;width:auto;}.newsletter-wrapper .newsletter_bar_col a{margin:0;background-color:#0000;padding:0;width:32px;height:32px;}.newsletter-wrapper .social_icon:after{display:none;}.newsletter-wrapper .widget article:before, .newsletter-wrapper .widget article:after{display:none;}#sFollow_Block_0_0_1_0_0_0_1{margin:0;}.donation_banner{position:relative;background:#000;}.donation_banner .posts-custom *, .donation_banner .posts-custom :after, .donation_banner .posts-custom :before{margin:0;}.donation_banner .posts-custom .widget{position:absolute;inset:0;}.donation_banner__wrapper{position:relative;z-index:2;pointer-events:none;}.donation_banner .donate_btn{position:relative;z-index:2;}#sSHARED_-_Support_Block_0_0_7_0_0_3_1_0{color:#fff;}#sSHARED_-_Support_Block_0_0_7_0_0_3_1_1{font-weight:normal;}.grey_newsblock .newsletter-wrapper, .newsletter-wrapper, .newsletter-wrapper.sidebar{background:linear-gradient(91deg, #005dc7 28%, #1d63b2 65%, #0353ae 85%);}
To donate by check, phone, or other method, see our More Ways to Give page.
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, a sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing on AI software before operating in the EU, particularly for applications like self-driving vehicles.
Tools that could pose risks to hiring practices would also need to be subjected to risk assessments, and human oversight would be required in deploying the software,
AI systems including chatbots would be subjected to new transparency rules to avoid the creation of manipulated images and videos—known as deepfakes—without the public knowing that the images were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the end of the European Parliament session ends in May, includes exemptions to facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska toldReuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, toldThe New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."
The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."
Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.
The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."
The agreement is nonbinding and deals with four main areas: secure design, development, deployment, and operation and maintenance.
Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.
The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly toldReuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."
Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.
"But that's not the only area where AI can cause harm," Eisen said on social media.
Eisen pointed to a recent Brrokings analysis about how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.
"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."
At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," considering it does not require policymakers or companies to adhere to the guidelines.
Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.
"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."
European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require AI systems makers to publish summaries of the training material they use and prove that they will not generate illegal content. It would also bar companies from scraping biometric data from social media, which a U.S. AI company was found to be doing last year.
"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."
"It's time to get serious about advanced AI systems," said one computer science professor. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."
Amid preparations for a global artificial intelligence safety summit in the United Kingdom, two dozen AI experts on Tuesday released a short paper and policy supplement urging humanity to "address ongoing harms and anticipate emerging risks" associated with the rapidly developing technology.
The experts—including Yoshua Bengio, Geoffrey Hinton, and Andrew Yao—wrote that "AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it."
Already, "high deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots," they noted, stressing how much advancement has come in just the past few years. "There is no fundamental reason why AI progress would slow or halt at the human level."
"Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."
Given that "AI systems could rapidly come to outperform humans in an increasing number of tasks," the experts warned, "if such systems are not carefully designed and deployed, they pose a range of societal-scale risks."
"They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society," the experts wrote. "They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance."
"Many of these risks could soon be amplified, and new risks created, as companies are developing autonomous AI: systems that can plan, act in the world, and pursue goals," they highlighted. "Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check."
"AI assistants are already co-writing a large share of computer code worldwide; future AI systems could insert and then exploit security vulnerabilities to control the computer systems behind our communication, media, banking, supply chains, militaries, and governments," they explained. "In open conflict, AI systems could threaten with or use autonomous or biological weapons. AI having access to such technology would merely continue existing trends to automate military activity, biological research, and AI development itself. If AI systems pursued such strategies with sufficient skill, it would be difficult for humans to intervene."
The experts asserted that until sufficient regulations exist, major companies should "lay out if-then commitments: specific safety measures they will take if specific red-line capabilities are found in their AI systems." They are also calling on tech giants and public funders to put at least a third of their artificial intelligence research and development budgets toward "ensuring safety and ethical use, comparable to their funding for AI capabilities."
Meanwhile, policymakers must get to work. According to the experts:
To keep up with rapid progress and avoid inflexible laws, national institutions need strong technical expertise and the authority to act swiftly. To address international race dynamics, they need the affordance to facilitate international agreements and partnerships. To protect low-risk use and academic research, they should avoid undue bureaucratic hurdles for small and predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: a small number of most powerful AI systems—trained on billion-dollar supercomputers—which will have the most hazardous and unpredictable capabilities.
To enable effective regulation, governments urgently need comprehensive insight into AI development. Regulators should require model registration, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage. Regulators also need access to advanced AI systems before deployment to evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible.
The experts also advocated for holding frontier AI developers and owners legally accountable for harms "that can be reasonably foreseen and prevented." As for future systems that could evade human control, they wrote, "governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready."
Stuart Russell, one of the experts behind the documents and a computer science professor at the University of California, Berkeley, toldThe Guardian that "there are more regulations on sandwich shops than there are on AI companies."
"It's time to get serious about advanced AI systems," Russell said. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless."
In the United States, President Joe Biden plans to soon unveil an AI executive order, and U.S. Sens. Brian Schatz (D-Hawaii) and John Kennedy (R-La.) on Tuesday introduced a generative artificial intelligence bill welcomed by advocates.
"Generative AI threatens to plunge us into a world of fraud, deceit, disinformation, and confusion on a never-before-seen scale," said Public Citizen's Richard Anthony. "The Schatz-Kennedy AI Labeling Act would steer us away from this dystopian future by ensuring we can distinguish between content from humans and content from machines."