The media outlets claim the company violated copyright laws.
The Intercept, Raw Story, and Alternet joined forces on Wednesday to sue OpenAI for using copyrighted content to train its generative artificial intelligence tool ChatGPT.
The law firm Loevy + Loevy, which is representing the publications, filed the lawsuit in the U.S. District Court for the Southern District of New York. The firm claims OpenAI violated the Digital Millennium Copyright Act (DMCA) by using copyrighted content from news organizations to train ChatGPT.
"Had OpenAI trained ChatGPT using these works as they were published, including author, title, and copyright information, ChatGPT may have learned to respect third-party copyrights, or at least inform ChatGPT users that it was providing responses that were based on the copyrighted works of others. Instead, OpenAI removed that information from its ChatGPT training sets, in violation of the DMCA," the firm said in a statement.
NEWS: @RawStory is suing @OpenAI, creator of #ChatGPT.
“I think it's time for tech companies to be proactive in compensating publishers for their work,” Raw Story CEO @JohnByrnester told @corbinbolies of @TheDailyBeast https://t.co/dVX1q1qsvA
— Raw Story (@RawStory) February 28, 2024
OpenAI is facing multiple lawsuits over its use of copyrighted material, including from comedian Sarah Silverman and The New York Times. The Times lawsuit also references violations of the DMCA. OpenAI recently claimed the Times "hacked" ChatGPT to get it to reproduce its copyrighted content.
Publications like The Associated Press have formed partnerships with OpenAI, licensing their work to the company rather than suing it over the use of copyrighted content. According to the AI-based text analysis company Copyleaks, approximately 60% of the content generated by ChatGPT-3.5 is plagiarized.
OpenAI argues its actions fall under "fair use." In 2016, the U.S. Supreme Court let a lower court ruling stand that said Google had not violated copyright laws by digitizing millions of books, so OpenAI may have a shot at winning with that kind of argument. It remains to be seen if any of the lawsuits against the company will make their way to the Supreme Court.
"Developers like OpenAI have garnered billions in investment and revenue because of AI products fundamentally created with and trained on copyright-protected material," said Loevy + Loevy partner Matt Topic, who represents the news organizations in the suits."The Digital Millennium Copyright Act prohibits the removal of author, title, and copyright notice when there is reason to know it would conceal or facilitate copyright infringement, and unlike traditional copyright infringement claims, it does not require creators to incur the copyright registration fees that often make traditional copyright infringement suits cost prohibitive given the massive scale of OpenAI's infringement."
Although the growing momentum and debate on AI governance are welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments and focus on the most important present-day AI risks.
The year 2023 marked a new era of “AI hype,” rapidly steering policymakers toward discussions on the safety and regulation of new artificial intelligence, or AI, technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the E.U. AI Act being reached.
While the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate the Western world’s first “AI rulebook” goes some way toward protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalized. This came soon after the U.K. government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI.
Although the growing momentum and debate on AI governance are welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments and focus on the most important present-day AI risks, and critically whether they will translate into further substantive action in other jurisdictions.
While AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance, and discrimination. All too often, AI systems are trained on massive amounts of private and public data—data which reflects societal injustices, often leading to biased outcomes and exacerbating inequalities. From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalized in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who, as Amnesty International has documented, have endured devastating financial problems, while facial recognition technology has been used by police and security forces to target racialized communities and entrench Israel’s system of apartheid.
So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term AI itself, which makes efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. Because AI systems are used in many different domains across the public and private sectors, a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labor, data, software, and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly considered hardware or software; their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.
Alongside the E.U. legislative process, the U.K., U.S., and others have set out their own distinct roadmaps and approaches to identifying the key risks AI technologies present and how they intend to mitigate them. Whilst these legislative processes are complex, that should not delay efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and center the already documented harms to people subjected to these systems. Commitments and principles on the “responsible” development and use of AI—the core of the current pro-innovation regulatory framework being pursued by the U.K.—do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.
Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. While these evaluations may be a useful tool within any regulatory toolkit, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.
Others must learn from the E.U. process and ensure there are no loopholes that allow public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalized groups. This remains a glaring gap in the U.K., U.S., and E.U. approaches, which fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority, whose voices are not represented in these discussions. There have already been documented cases of outsourced workers in Kenya and Pakistan being exploited by companies developing AI tools.
As we enter 2024, now is the time to ensure not only that AI systems are rights-respecting by design, but also that those impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, and that their experiences are continually surfaced and centered within these discussions. More than lip service by lawmakers, we need binding regulation that holds companies and other key industry players to account—and ensures that profits do not come at the expense of human rights protections. International, regional, and national governance efforts must complement and catalyze each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards—these are not mutually exclusive. This is the level at which accountability is served; we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.
"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words," warned one policy analyst.
ChatGPT maker OpenAI this week quietly removed language from its usage policy that prohibited military use of its technology, a move with serious implications given the increasing use of artificial intelligence on battlefields, including in Gaza.
ChatGPT is a free tool that lets users enter prompts to receive text or images generated by AI. The Intercept's Sam Biddle reported Friday that prior to Wednesday, OpenAI's permissible uses page banned "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare."
Although the company's new policy stipulates that users should not harm human beings or "develop or use weapons," experts said the removal of the "military and warfare" language leaves open the door for lucrative contracts with U.S. and other militaries.
"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.
"The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement," she added.
An OpenAI spokesperson told Common Dreams in an email:
Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with [the Defense Advanced Research Projects Agency] to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under "military" in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.
As AI advances, so does its weaponization. Experts warn that AI applications including lethal autonomous weapons systems, commonly called "killer robots," could pose a potentially existential threat to humanity that underscores the imperative of arms control measures to slow the pace of weaponization.
That's the goal of nuclear weapons legislation introduced last year in the U.S. Congress. The bipartisan Block Nuclear Launch by Autonomous Artificial Intelligence Act—introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.)—asserts that "any decision to launch a nuclear weapon should not be made" by AI.