Warning of AI Threat to 'Human Existence,' Health Experts Urge Halt to Unregulated Rollout

A surgical team of the future operates with the help of virtual reality and artificial intelligence. (Photo: OneForAll/Getty Images)

"If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances."

While many experts agree that artificial intelligence holds tremendous potential for advancing medical science and human health, a group of international doctors and other specialists warned this week that AI "could pose an existential threat to humanity" and called for a moratorium on the development of such technology pending suitable regulation.

Responding to an open letter signed by thousands of experts calling for a pause on the development and deployment of advanced AI technology, pioneering inventor, futurist, and Singularity Group co-founder Ray Kurzweil—who did not sign the letter—said on Wednesday that "there are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields."

However, an analysis by an international group of physicians and related experts published in the latest edition of the peer-reviewed journal BMJ Global Health warns that "while artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic, and security-related determinants of health."

According to the study:

The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare. One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.

Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned. It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.

The publication's authors highlighted three distinct sets of threats associated with the misuse of AI. The first of these is "the ability of AI to rapidly clean, organize, and analyze massive data sets consisting of personal data, including images."

This can be utilized "to manipulate behavior and subvert democracy," the authors explained, citing the role of AI in attempts to subvert the 2013 and 2017 Kenyan elections, the 2016 U.S. presidential race, and the 2017 French presidential contest.

"When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts," the analysis contends.

The second set of threats concerns the development and deployment of lethal autonomous weapons systems—often referred to as "killer robots"—that can select, engage, and destroy human targets without meaningful human control.

The third threat set involves the many millions of jobs that experts predict will be lost due to the widespread deployment of AI technology.

"While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide," the analysis states.

Furthermore, the paper warns that the threat posed by self-improving, general-purpose AI—or AGI—is "potentially all-encompassing":

We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power—whether deliberately or not—in ways that could harm or subjugate humans—is real and has to be considered. If realized, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons, and all the digital systems that increasingly run our societies, could well represent the "biggest event in human history."

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing," the authors stressed. "The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit."

"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI 'arms race,'" the analysis stresses. "It will also require decision-making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest."

"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation."

"If AI is to ever fulfill its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances," the authors concluded.

The new analysis comes a week after the White House unveiled a plan meant to promote "responsible American innovation in artificial intelligence."

On Wednesday, Data for Progress published a survey showing that more than half of U.S. voters—including 52% of Democrats, 57% of Independents, and 58% of Republicans—believe the United States "should slow down AI progress."

According to the survey, 62% of voters also support the creation of a federal agency to regulate the development and deployment of AI technology.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.