Under the terms of the agreement, the nonprofit OpenAI Foundation will hold a $130 billion stake in the new for-profit company, called OpenAI Group PBC, which the firm says will make it "one of the best resourced philanthropic organizations ever."
A source told the Times that OpenAI CEO Sam Altman "does not have a significant stake in the new for-profit company." Microsoft, OpenAI's biggest investor, will hold a $135 billion stake in OpenAI Group PBC, while the remaining shares will be held by "current and former employees and other investors," writes the Times.
Robert Weissman, co-president of Public Citizen, immediately blasted the move and warned that reassurances about the nonprofit OpenAI Foundation maintaining "control" of the project were completely empty.
"Since the November 2023 coup at OpenAI, there is no evidence whatsoever of the nonprofit exerting control over the for-profit, and only evidence of the reverse," he argued, referencing a shakeup at the company nearly two years ago, which saw Altman removed and then restored to his leadership role.
Weissman warned that OpenAI has consistently "rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols."
As evidence of this, Weissman pointed to Altman's announcement that ChatGPT would soon allow erotica for verified adults, as well as OpenAI's recent introduction of its Sora 2 AI video platform, which he said "threatens to destroy social norms of truth."
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," he said. "Based on the past two years, we can expect OpenAI Foundation to leave dormant its power (and obligation) to exert control over OpenAI For-profit."
Weissman concluded that the deal to make OpenAI into a for-profit company "should not be allowed to stand" and encouraged the state attorneys general in Delaware and California to "exert their authority to dissolve OpenAI Nonprofit and reallocate its resources to new organizations in the charitable sector."
Weissman's warning about OpenAI becoming a reckless and out-of-control for-profit behemoth was echoed on Tuesday by Steven Adler, an AI researcher and former product safety leader at OpenAI.
Drawing on his experience at the firm, Adler wrote an opinion piece for The New York Times in which he questioned OpenAI's commitment to mitigating mental health dangers caused or exacerbated by its flagship chatbot.
"I believe OpenAI wants its products to be safe to use," Adler explained. "But it also has a history of paying too little attention to established risks. This spring, the company released—and after backlash, withdrew—an egregiously 'sycophantic' version of ChatGPT that would reinforce users' extreme delusions, like being targeted by the FBI. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in AI circles since at least 2023."
Adler knocked the company for its overall lack of transparency, and he noted that both it and Google DeepMind seem to have "broken commitments related to publishing safety-testing results before a major product introduction."
Adler chalked up these problems to developing AI in a highly competitive for-profit market in which new capabilities are pushed out before safety risks are properly assessed.
"If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today," he concluded.