"If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances."
While many experts agree that artificial intelligence holds tremendous potential for advancing medical science and human health, an international group of doctors and other specialists warned this week that AI "could pose an existential threat to humanity" and called for a moratorium on the development of such technology pending suitable regulation.
Responding to an open letter signed by thousands of experts calling for a pause on the development and deployment of advanced AI technology, pioneering inventor, futurist, and Singularity Group co-founder Ray Kurzweil—who did not sign the letter—said on Wednesday that "there are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields."
However, an analysis by an international group of physicians and related experts published in the latest edition of the peer-reviewed journal BMJ Global Health warns that "while artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic, and security-related determinants of health."
"Health experts call for a halt to self-improving general AI development until regulation catches up.

@GlobalHealthBMJ warn of harms to patients, data privacy issues, and a worsening of social and health inequalities, among other potential dangers. https://t.co/7bO970xL4b"
—Future of Life Institute, May 10, 2023
According to the study:
The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare. One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.
Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned. It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.
The publication's authors highlighted three distinct sets of threats associated with the misuse of AI. The first of these is "the ability of AI to rapidly clean, organize, and analyze massive data sets consisting of personal data, including images."
"Arda of @Identity2_0 on automation within healthcare

Visit https://t.co/JzCTyuarxi to hear more from Arda and Savena of Identity 2.0 #digitaldehumanisation #autonomy #automation #ai #techforgood #teamhuman #healthcare"
—Stop Killer Robots, May 4, 2023
This can be utilized "to manipulate behavior and subvert democracy," the authors explained, citing the role of AI in attempts to subvert the 2013 and 2017 Kenyan elections, the 2016 U.S. presidential race, and the 2017 French presidential contest.
"When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts," the analysis contends.
The second set of threats concerns the development and deployment of lethal autonomous weapons systems—often referred to as "killer robots"—that can select, engage, and destroy human targets without meaningful human control.
The third threat set involves the many millions of jobs that experts predict will be lost due to the widespread deployment of AI technology.
"Tom and Jerry creators predicted Job loss due to AI 60 years back.

This is likely the outcome when to add Boston dynamics + GPT powered Context + visual AI

👉80% of current jobs we are training our graduates for will not be there in next 10 years

👉Massive upskillings and…"
—Ashish Dogra, May 3, 2023
"While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide," the analysis states.
Furthermore, the paper warns that the threat posed by self-improving, general-purpose AI—or artificial general intelligence (AGI)—is "potentially all-encompassing":
We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power—whether deliberately or not—in ways that could harm or subjugate humans—is real and has to be considered. If realized, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons, and all the digital systems that increasingly run our societies, could well represent the "biggest event in human history."
"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing," the authors stressed. "The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI 'arms race,'" the analysis stresses. "It will also require decision-making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation."
"If AI is to ever fulfill its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances," the authors concluded.
The new analysis comes a week after the White House unveiled a plan meant to promote "responsible American innovation in artificial intelligence."
On Wednesday, Data for Progress published a survey showing that more than half of U.S. voters—including 52% of Democrats, 57% of Independents, and 58% of Republicans—believe the United States "should slow down AI progress."
"NEW POLL: Voters are concerned about ChatGPT, and 62% of voters — including majorities of Democrats, Independents, and Republicans — support creating a federal agency to regulate standards for the development and use of AI systems. https://t.co/AkmL5givjZ"
—Data for Progress, May 10, 2023
According to the survey, 62% of voters also support the creation of a federal agency to regulate the development and deployment of AI technology.
"President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies," Public Citizen's Robert Weissman argued.
As the White House rolled out that plan on Thursday, a leading U.S. consumer advocate added his voice to the growing number of experts calling for a moratorium on the development and deployment of advanced AI technology.
"Today's announcement from the White House is a useful step forward, but much more is needed to address the threats of runaway corporate AI," Robert Weissman, president of the consumer advocacy group Public Citizen, said in a statement.
"But we also need more aggressive measures," Weissman asserted. "President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies, to remain in effect until there is a robust regulatory framework in place to address generative AI's enormous risks."
"At this point, Big Tech needs to be saved from itself.

It makes no sense for We the People to just sit by and hope their competitive arms race on generative AI works out.

The US govt must impose a moratorium on new generative AI technologies. https://t.co/L2TuAkDkGk"
—Robert Weissman, May 4, 2023
The White House says its AI plan builds on steps the Biden administration has taken "to promote responsible innovation."
"These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year," the administration said.
The White House plan includes $140 million in National Science Foundation funding for seven new national AI research institutes—there are already 25 such facilities—that will "catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good."
The new plan also includes "an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems."
Representatives of some of those companies—including Google, Microsoft, Anthropic, and OpenAI, creator of the popular ChatGPT chatbot—met with Vice President Kamala Harris and other administration officials at the White House on Thursday. According to The New York Times, President Joe Biden "briefly" dropped in on the meeting.
"This is a big deal: The @WhiteHouse will be issuing guidance on the use of AI systems by the government.

This, along with everything else they announced today, must be centered on the #AIBillOfRights and developed through meaningful community engagement. https://t.co/CDxamaxWEm"
—The Leadership Conference, May 4, 2023
"AI is one of today's most powerful technologies, with the potential to improve people's lives and tackle some of society's biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy," Harris said in a statement.
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products," she added.
"It strikes me that this meeting would be much more honest & productive with at least one critical #AI expert in attendance. https://t.co/JjdLB6z9wi"
—Elizabeth M. Renieris, May 4, 2023
Thursday's White House meeting and plan come amid mounting concerns over the potential dangers posed by artificial intelligence on a range of issues, including military applications, life-and-death healthcare decisions, and impacts on the labor force.
In late March, tech leaders and researchers published an open letter, signed by more than 27,000 experts, scholars, and others, urging "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Noting that AI developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control," the letter asks:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?
"Such decisions must not be delegated to unelected tech leaders," the signers asserted. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
"so what should we do about this?

I think the first step is that we have to slow down research for now

it's near impossible to uninvent something, and we have to tread very lightly when playing with this kind of power

I'm not alone in this, by the way
https://t.co/l2eAA1FOAf"
—Freya Holmér, May 2, 2023
Last month, Public Citizen argued that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," the group said in a report. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."
According to the annual AI Index Report published last month by the Stanford Institute for Human-Centered Artificial Intelligence, nearly three-quarters of researchers believe artificial intelligence "could soon lead to revolutionary social change," while 36% worry that AI decisions "could cause nuclear-level catastrophe."