"As much as the White House can do on its own, those measures are no substitute for agency regulation and legislative action," said one leading consumer advocate.
While welcoming U.S. President Joe Biden's executive order aimed at setting standards for artificial intelligence safety and security, digital rights campaigners on Monday also stressed that it is only a first step and that federal regulation and congressional action are needed if the directive is to be effective.
"AI is all around us," Biden said before signing the order. "To realize the promise of AI and avoid the risk, we need to govern this technology."
To that end, the president's executive order—which he says must be backed by congressional legislation—requires "developers of the most powerful AI systems" to inform the federal government of safety test results and other key data. The National Institute of Standards and Technology will also be tasked with drafting AI safety and security standards.
"It's hard to say that this document, on its own, represents much progress."
The order also aims to prevent AI from engineering dangerous biological materials "by developing strong new standards for biological synthesis screening," while directing the U.S. Department of Commerce to "develop guidance for content authentication and watermarking to clearly label AI-generated content," an effort to protect consumers from fraud and deception.
Caitlin Seeley George, campaigns and managing director at the advocacy group Fight for the Future, called Biden's order "a positive step."
"However, it's hard to say that this document, on its own, represents much progress," she asserted.
Seeley George continued:
Agencies like the [Federal Trade Commission] have already taken some action to rein in abuses of AI, and this executive order could supercharge such efforts, unlocking the federal government's ability to put critical guardrails in place to address harmful impacts of AI. But there's also the possibility that agencies do the bare minimum, a choice that would render this executive order toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI.
"It's impossible to ignore the gaping hole in this order when it comes to law enforcement agencies' use of AI," said Seeley George. "Some of the most harmful uses of AI are currently being perpetrated by law enforcement, from predictive policing algorithms and pre-trial assessments to biometric surveillance systems like facial recognition."
Noting that AI systems used by police "deliver discriminatory outcomes, particularly for Black people and other people of color," Seeley George added that "we cannot stress enough that if the Biden administration fails to put real limits on how law enforcement uses AI, their effort will ultimately fail in its goal of addressing the biggest threats that AI poses to our civil rights."
Maria Langholz, director of communications at Demand Progress, said in a statement that the advocacy group applauds Biden "for his leadership in advancing the national conversation on comprehensive AI regulation."
Langholz continued:
Given the long history of Big Tech companies like Facebook, Apple, Google, Microsoft, and Amazon abusing their monopoly power in areas from cloud computing to worker surveillance, Americans should be deeply concerned about corporate consolidation of AI technologies. We have already seen the tech giants begin to sweep up small innovators, and we expect that this will continue in the absence of a major intervention.
"In the coming months, Demand Progress will work to ensure the Biden administration and Congress' emerging AI frameworks have teeth to meaningfully rein in Big Tech corporate consolidation, to thoughtfully monitor and restrain military and law enforcement applications, and to protect against undue surveillance and consumer privacy violations," she added.
At the consumer advocacy group Public Citizen, president Robert Weissman said in a statement that "today's executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology—but it's only a first step."
The order, Weissman continued, "builds on the White House's Blueprint for an AI Bill of Rights and the administration's important move last week to ensure its trade policy does not preempt AI and technology-related policymaking."
The White House said Monday that it will take additional action in the months ahead.
"As much as the White House can do on its own, those measures are no substitute for agency regulation and legislative action," Weissman added. "Preventing the foreseeable and unforeseeable threats from AI requires agencies and Congress take the baton from the White House and act now to shape the future of AI—rather than letting a handful of corporations determine our future, at potentially great peril."
Following Biden's signing of the executive order, Senate Majority Leader Chuck Schumer (D-N.Y.) said that "executive orders are limited, and the president and I agree we need legislation."
Schumer—who has hosted two recent AI forums—added that a bipartisan working group would meet with Biden Tuesday "to move forward on AI legislatively" with "urgency" and "humility."
"This is about the hardest thing I've attempted to undertake legislatively," he said.
The executive order comes during preparations for a global AI safety summit in the United Kingdom next month, ahead of which two dozen experts warned that policymakers must act now to prevent "societal-scale" damage from the technology.