In a report published Monday, a leading human rights group calls for international political action to prohibit and regulate so-called "killer robots"—autonomous weapons systems that select targets based on inputs from sensors rather than from humans—and examines them in the context of six core principles in international human rights law.
In some cases, the report argues, an autonomous weapons system may simply be incompatible with a given human rights principle or obligation.
The report, co-published by Human Rights Watch and Harvard Law School's International Human Rights Clinic, comes just ahead of the first United Nations General Assembly meeting on autonomous weapons systems next month. Back in 2017, dozens of artificial intelligence and robotics experts published a letter urging the U.N. to ban the development and use of killer robots. As drone warfare has grown, those calls have continued.
"To avoid a future of automated killing, governments should seize every opportunity to work toward the goal of adopting a global treaty on autonomous weapons systems," said the author behind the report, Bonnie Docherty, a senior arms adviser at Human Rights Watch and a lecturer on law at Harvard Law School's International Human Rights Clinic, in a statement on Monday.
According to the report, which includes recommendations on a potential international treaty, the call for negotiations to adopt "a legally binding instrument to prohibit and regulate autonomous weapons systems" is supported by at least 129 countries.
Drones relying on an autonomous targeting system have been used by Ukraine to hit Russian targets during the war between the two countries, The New York Times reported last year.
In 2023, the Pentagon announced the Replicator initiative, a push to build thousands of autonomous drones as part of the U.S. Defense Department's plan to counter China. In November, the watchdog group Public Citizen alleged that Pentagon officials have not been clear about whether the drones in the Replicator project would be used to kill.
A senior Navy admiral recently told Bloomberg that the program is "alive and well" under the Department of Defense's new leadership following U.S. President Donald Trump's return to the White House.
Docherty warned that the impact of killer robots will stretch beyond the traditional battlefield. "The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances, raising serious concerns under international human rights law," she said in the statement.
When it comes to the right to peaceful assembly under human rights law, which is especially important in the context of law enforcement's use of force, "autonomous weapons systems would be incompatible with this right," according to the report.
Killer robots pose a threat to peaceful assembly because they "would lack human judgment and could not be pre-programmed or trained to address every situation," meaning they "would find it challenging to draw the line between peaceful and violent protesters."
Also, "the use or threat of use of autonomous weapons systems, especially in the hands of abusive governments, could strike fear among protesters and thus cause a chilling effect on free expression and peaceful assembly," per the report.
Killer robots would also contravene the principle of human dignity, which, according to the report, establishes that all humans have inherent worth that is "universal and inviolable."
"The dignity critique is not focused on the systems generating the wrong outcomes," the report states. "Even if autonomous weapons systems could feasibly make no errors in outcomes—something that is extremely unlikely—the human dignity concerns remain, necessitating prohibitions and regulations of such systems."
"Autonomous weapon systems cannot be programmed to give value to human life, do not possess emotions like compassion that can generate restraint to violence, and would rely on processes that dehumanize individuals by making life-and-death decisions based on software and data points," Docherty added.
In total, the report considers the right to life; the right to peaceful assembly; the principle of human dignity; the principle of nondiscrimination; the right to privacy; and the right to remedy.
The report also lists cases where it's more ambiguous whether autonomous weapons systems would violate a certain right.
The right to privacy, for example, protects individuals from "arbitrary or unlawful" interferences in their personal life. According to the report, "The development and use of autonomous weapons systems could violate the right because, if they or any of their component systems are based on AI technology, their development, testing, training, and use would likely require mass surveillance."