Jul 18, 2018
Thousands of artificial intelligence (AI) experts and developers have signed a pledge vowing to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," and imploring governments worldwide to work together to "create a future with strong international norms, regulations, and laws" barring so-called killer robots.
"We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody."
--Anthony Aguirre,
UC-Santa Cruz
More than 160 companies and groups from three dozen countries and 2,400 individuals from 90 countries are backing the pledge, which was developed by the Boston-based Future of Life Institute (FLI) and unveiled Wednesday during the annual International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden.
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," declared FLI president and MIT professor Max Tegmark. "AI has huge potential to help the world--if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
As Anthony Aguirre, a professor at the University of California-Santa Cruz and pledge signatory, told CNN, "We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody."
Signatory Yoshua Bengio, an AI expert at the Montreal Institute for Learning Algorithms, explained that the pledge has the potential to sway public opinion by shaming developers of killer robots, also referred to as lethal autonomous weapons systems.
"This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning land mines," Bengio pointed out in an interview with the Guardian. "American companies have stopped building land mines."
Lucy Suchman, a professor at England's Lancaster University, emphasized the importance of AI researchers staying involved with how their inventions are used, noting that as a developer she would "first, commit to tracking the subsequent uses of my technologies and speaking out against their application to automating target recognition and, second, refuse to participate in either advising or directly helping to incorporate the technology into an autonomous weapon system."
Other high-profile supporters of the pledge include SpaceX and Tesla CEO Elon Musk; Skype founder Jaan Tallinn; Jeffrey Dean, Google's head of research and machine intelligence; and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the co-founders of DeepMind.
As AI technology has continued to advance, the United Nations has convened a group of governmental experts to address mounting concerns raised by human rights organizations, advocacy groups, military leaders, lawmakers, and tech experts--many of whom have, for years, demanded a global ban on killer robots.
In recent years, tech experts have used IJCAI as an opportunity to pressure world leaders to outlaw autonomous weapons, which, as the new pledge warns, "could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems." Without a ban, such weapons "could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage."