"Google will probably now work on deploying technology directly that can kill people," said one former ethical AI staffer at the tech giant.
Weeks into U.S. President Donald Trump's second term, Google on Tuesday removed from its Responsible AI principles a commitment to not use artificial intelligence to develop technologies that could cause "overall harm," including weapons and surveillance—walking back a pledge that employees pushed for seven years ago as they reminded the company of its motto at the time: "Don't be evil."
That maxim was deleted from the company's code of conduct shortly after thousands of employees demanded in 2018 that Google end its collaboration with the Pentagon on potential drone technology, and this week officials at the Silicon Valley giant announced they can no longer promise to refrain from AI weapons development.
James Manyika, senior vice president for research, technology, and society, and Demis Hassabis, CEO of the company's AI research lab DeepMind, wrote in a blog post on progress in "Responsible AI" that in "an increasingly complex geopolitical landscape... democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
"And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," they said.
Until Tuesday, Google pledged that "applications we will not pursue" with AI included weapons, surveillance, technologies that "cause or are likely to cause overall harm," and uses that violate international law and human rights.
"Is this as terrifying as it sounds?" asked one journalist and author as the mention of those applications disappeared from the campany's AI Principles page, where it had been included as recently as last week.
Margaret Mitchell, who previously co-led Google's ethical AI team, told Bloomberg that the removal of the principles "is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."
"It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public."
The company's updated AI Principles page says it will implement "appropriate human oversight" to align its work with "widely accepted principles of international law and human rights" and that it will use testing and monitoring "to mitigate unintended or harmful outcomes and avoid unfair bias."
But with Google aligning itself with the Trump administration, human rights advocate Sarah Leah Whitson of Democracy for the Arab World Now called the company a "corporate war machine" following Tuesday's announcement.
Google donated $1 million to Trump's inaugural committee, as did other tech giants, and sent CEO Sundar Pichai to the inauguration, where he sat next to the president's top ally in the industry, Elon Musk.
Since Trump won the election in November, tech companies have also distanced themselves from previous pledges to strive for diversity, equity, and inclusion in their hiring and workplace practices, as Trump has directly targeted DEI programs in the federal government.
"It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public," Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA, toldWired on Tuesday.
At Google, said Koul, there is still "long-standing employee sentiment that the company should not be in the business of war."