WHO Warns Untested AI Tech Could 'Cause Harm to Patients'
The "growing experimental use" of ChatGPT and similar tools in medical contexts should be halted until pressing concerns are addressed and "clear evidence of benefit" is demonstrated, said the United Nations health agency.
The ongoing failure to adequately regulate large language model tools powered by artificial intelligence is jeopardizing human well-being, the World Health Organization said Tuesday.
The WHO lamented that precautions typically taken with regard to any new technology are not being applied consistently when it comes to large language models (LLMs), which use AI to analyze data, create content, and answer questions—often incorrectly. Accordingly, the United Nations agency called for sufficient risk assessments to be conducted and corresponding safeguards implemented before LLMs become entrenched in healthcare.
The "meteoric public diffusion and growing experimental use" of LLMs—including ChatGPT, Bard, Bert, and other platforms that "imitate understanding, processing, and producing human communication"—in medical settings "is generating significant excitement around the potential to support people's health needs," the WHO noted. However, "it is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity."
"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world," the agency warned.
Specific concerns identified by the WHO include:
- The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses;
- LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response; and
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.
"While committed to harnessing new technologies, including AI and digital health to improve human health, WHO recommends that policymakers ensure patient safety and protection while technology firms work to commercialize LLMs," the agency added. "WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine healthcare and medicine—whether by individuals, care providers, or health system administrators and policymakers."
The agency reiterated "the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health."
The WHO expressed its worries just days after an international group of doctors warned in the peer-reviewed journal BMJ Global Health that AI "could pose an existential threat to humanity" and demanded a moratorium on the development of such technology pending robust regulation.
"While artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being," the physicians and related experts wrote. "With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing."
Fears of the negative implications of AI in healthcare and other arenas appear to be well-founded. As Common Dreams reported in March, progressives urged the Biden administration to intervene after an investigation showed that Medicare Advantage insurers' use of unregulated AI tools to determine when to end payment for patients' treatments had resulted in the premature termination of coverage for vulnerable seniors.
"Robots should not be making life-or-death decisions," health justice advocate Ady Barkan wrote on social media at the time, as he shared a petition imploring the White House to stop #DeathByAI.
Warning of AI Threat to 'Human Existence,' Health Experts Urge Halt to Unregulated Rollout
"If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances."
While many experts agree that artificial intelligence holds tremendous potential for advancing medical science and human health, a group of international doctors and other specialists warned this week that AI "could pose an existential threat to humanity" and called for a moratorium on the development of such technology pending suitable regulation.
Responding to an open letter signed by thousands of experts calling for a pause on the development and deployment of advanced AI technology, pioneering inventor, futurist, and Singularity Group co-founder Ray Kurzweil—who did not sign the letter—said on Wednesday that "there are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields."
However, an analysis by an international group of physicians and related experts published in the latest edition of the peer-reviewed journal BMJ Global Health warns that "while artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic, and security-related determinants of health."
The Future of Life Institute tweeted: "Health experts call for a halt to self-improving general AI development until regulation catches up. @GlobalHealthBMJ warn of harms to patients, data privacy issues, and a worsening of social and health inequalities, among other potential dangers. https://t.co/7bO970xL4b"
According to the study:
The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare. One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.
Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned. It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.
The publication's authors highlighted three distinct sets of threats associated with the misuse of AI. The first of these is "the ability of AI to rapidly clean, organize, and analyze massive data sets consisting of personal data, including images."
The group Stop Killer Robots tweeted: "Arda of @Identity2_0 on automation within healthcare. Visit https://t.co/JzCTyuarxi to hear more from Arda and Savena of Identity 2.0."
This can be utilized "to manipulate behavior and subvert democracy," the authors explained, citing the role of AI in attempts to subvert the 2013 and 2017 Kenyan elections, the 2016 U.S. presidential race, and the 2017 French presidential contest.
"When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts," the analysis contends.
The second set of threats concerns the development and deployment of lethal autonomous weapons systems—often referred to as "killer robots"—that can select, engage, and destroy human targets without meaningful human control.
The third threat set involves the many millions of jobs that experts predict will be lost due to the widespread deployment of AI technology.
Ashish Dogra tweeted: "Tom and Jerry creators predicted job loss due to AI 60 years back. This is likely the outcome when [you] add Boston Dynamics + GPT-powered context + visual AI. 👉 80% of current jobs we are training our graduates for will not be there in next 10 years. 👉 Massive upskillings and…"
"While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide," the analysis states.
Furthermore, the paper warns that the threat posed by self-improving, general-purpose AI, or AGI, is "potentially all-encompassing":
We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power—whether deliberately or not—in ways that could harm or subjugate humans is real and has to be considered. If realized, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons, and all the digital systems that increasingly run our societies, could well represent the "biggest event in human history."
"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing," the authors stressed. "The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI 'arms race,'" the analysis stresses. "It will also require decision-making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation."
"If AI is to ever fulfill its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances," the authors concluded.
The new analysis comes a week after the White House unveiled a plan meant to promote "responsible American innovation in artificial intelligence."
On Wednesday, Data for Progress published a survey showing that more than half of U.S. voters—including 52% of Democrats, 57% of Independents, and 58% of Republicans—believe the United States "should slow down AI progress."
Data for Progress tweeted: "NEW POLL: Voters are concerned about ChatGPT, and 62% of voters—including majorities of Democrats, Independents, and Republicans—support creating a federal agency to regulate standards for the development and use of AI systems. https://t.co/AkmL5givjZ"
According to the survey, 62% of voters also support the creation of a federal agency to regulate the development and deployment of AI technology.