US Among 18 Countries to Reach Deal on Keeping AI 'Secure by Design'
The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."
Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.
The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."
The agreement is nonbinding and deals with four main areas: secure design, development, deployment, and operation and maintenance.
Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.
The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."
Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.
"But that's not the only area where AI can cause harm," Eisen said on social media.
Eisen pointed to a recent Brookings analysis about how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.
"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."
At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," given that it does not require policymakers or companies to adhere to its guidelines.
Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.
"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."
European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require AI systems makers to publish summaries of the training material they use and prove that they will not generate illegal content. It would also bar companies from scraping biometric data from social media, which a U.S. AI company was found to be doing last year.
"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."
Report Urges US-Russian Cooperation to Reduce Risk of Cyberattack Causing Nuclear War
"There is no more urgent task than understanding and mitigating the potential risks posed by the interaction of advancing cyber capabilities with nuclear weapons systems."
A report published Wednesday by a U.S. nonprofit group recommends cooperation between the United States and Russia aimed at reducing the threat of a nuclear war sparked by cyberattacks on nuclear weapon systems.
"In the modern nuclear age, there is no more urgent task than understanding and mitigating the potential risks posed by the interaction of advancing cyber capabilities and nuclear weapons systems," the Nuclear Threat Initiative (NTI) asserted in the report, entitled Reducing Cyber Risks to Nuclear Weapons: Proposals From a U.S.-Russia Expert Dialogue.
The publication "highlights the critical need for a global diplomatic approach to address growing cyber risks, including, where possible, through cooperation between the United States and Russia."
"Despite significant current geopolitical tensions, the United States and Russia have a mutual interest in avoiding the use of nuclear weapons and an obligation to work together to do so based on the understanding that a cyberattack on a nuclear weapons system could trigger catastrophic and unintended conflict and escalation," the group said in an implied reference to strained relations amid Russia's ongoing invasion of Ukraine.
NTI drew from talks between U.S. and Russian nonproliferation experts that took place in 2020 and 2021 prior to last year's invasion of Ukraine.
"While acknowledging the challenges posed by an already charged political environment, the dialogue emphasized the importance of maintaining cooperation between the United States and Russia on key nuclear security issues, the value of unilateral risk reduction actions, and the benefit of developing ideas for cooperative steps to be advanced when the political situation improves," the organization noted.
The talks yielded six recommendations for the U.S. and Russia to reduce cyber risks:
- Refrain from cyber interference in nuclear weapons and related systems, including nuclear command, control, communications, delivery, and warning systems;
- Evaluate options to minimize entanglement and/or integration of conventional and nuclear assets;
- Continue to improve the cybersecurity of their respective nuclear systems, including through unilateral "fail-safe" reviews;
- Increase transparency and expand communications during periods of increased tension;
- Adopt procedures to ensure that any cyber, information, or other operation involving information and communications technologies emanating from the United States or Russia with the potential to disrupt another nation's nuclear deterrence mission be approved at the same level as required for nuclear use; and
- Eliminate policies that threaten a nuclear weapons response to cyberattack.
"Today, the United States and Russia still possess roughly 90% of the world's nuclear weapons and are also among the most proficient and active developers and users of information and communications technology (ICT)," the report notes. "Nuclear weapons policies, however, have not kept up with these technological advancements."
"Meanwhile," the publication continues, "the ubiquity of advanced digital ICT tools, as well as their fulsome functional benefits, have led both countries' nuclear weapons enterprises to incorporate digital technologies into their nuclear weapons, warning, command, control, and communications systems."
"With that modernization come vulnerabilities and openness to cyberattacks that could prompt dangerous miscalculations or accidents, leading to nuclear use," NTI stated, adding that "in the mid- to long-term, cybersecurity can be improved in the context of ongoing nuclear weapons systems modernization."
"Mutual commitments can be codified through various political or legal formats," the report states. "Nuclear force modernization in each country presents an opportunity to clarify, isolate, and distinguish which systems are involved in nuclear deterrence missions from civilian infrastructure, critical national assets, and conventional warfighting systems."
"Modernization also provides opportunities to improve system resiliency and upgrade cybersecurity measures and practices," the publication adds. "Both the United States and Russia should prioritize cyber-nuclear weapons risk-reduction as they pursue future bilateral and multilateral arms control, confidence-building, and transparency initiatives."
The new report came a day after the U.S. Department of Defense published an unclassified summary of its 2023 Cyber Strategy, the first update in five years, in which the Pentagon stated it would "use cyberspace operations for the purpose of campaigning, undertaking actions to limit, frustrate, or disrupt adversaries' activities below the level of armed conflict and to achieve favorable security conditions."
The Pentagon added that it would "remain closely attuned to adversary perceptions and will manage the risk of unintended escalation."
Russia's war and U.S. support for Ukrainian efforts to oust invaders have heightened international calls for disarmament, with U.N. Secretary-General António Guterres recently warning that nuclear modernization and rising global mistrust are "a recipe for annihilation."