Noting the ubiquity of artificial intelligence in modern life, the United Nations' top human rights official on Wednesday called for a moratorium on the sale and use of AI systems that imperil human rights until sufficient safeguards against potential abuse are implemented.
"Action is needed now to put human rights guardrails on the use of AI, for the good of all of us."
--Michelle Bachelet, OHCHR
"Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times," U.N. High Commissioner for Human Rights Michelle Bachelet said. "But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights."
The former socialist president of Chile added that AI "now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online."
Bachelet's remarks came at a Council of Europe hearing on the Pegasus scandal, in which the Israeli firm NSO Group's spyware was used to target activists, journalists, and politicians worldwide, sparking calls for a global moratorium on the sale and transfer of surveillance technology.
Her comments also came as the Office of the United Nations High Commissioner for Human Rights (OHCHR) published a report analyzing AI's impacts on privacy and other rights.
"Inferences, predictions and monitoring performed by #AI, incl. seeking insights into human behaviour patterns, raise serious questions ⏩ Biased datasets can lead to discriminatory decisions: these risks are most acute for already marginalized groups. 👉 https://t.co/VmmR75aKzD" --UN Human Rights, September 15, 2021
According to OHCHR:
The report looks at how states and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition. The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged, and analyzed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out-of-date, or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.
"The complexity of the data environment, algorithms, and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society," the report states.
\u201c"Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned." \n\nI spoke to @abcnews about the new UN report on artificial intelligence and human rights \n\nhttps://t.co/ALIuX21OTJ\u201d— Evan Greer is on Mastodon (@Evan Greer is on Mastodon) 1631735936
Evan Greer, director of the digital rights group Fight for the Future, said that the new report "echoes the growing consensus among technology and human rights experts around the world" that "artificial intelligence-powered surveillance systems like facial recognition pose an existential threat to the future of human liberty."
"Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned," she continued. "Facial recognition and other discriminatory uses of artificial intelligence can do immense harm whether they're deployed by governments or private entities like corporations."
"We agree with the U.N. report's conclusion," added Greer. "There should be an immediate, worldwide moratorium on the sale of facial recognition surveillance technology and other harmful AI systems."
"Governments should implement a moratorium on the sale and transfer of #surveillance technology until compliance with human rights standards can be guaranteed. No excuses for inaction. It's time for a pause ⏸️" --Michelle Bachelet, September 15, 2021
"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared, and used is one of the most urgent human rights questions we face," Bachelet asserted Wednesday. "We cannot afford to continue playing catch-up regarding AI--allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact."
"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility," she added. "Action is needed now to put human rights guardrails on the use of AI, for the good of all of us."