For Orson Aguilar, a seasoned activist for economic justice, the news that OpenAI was dismantling its nonprofit roots felt like a warning shot. In October 2024, Silicon Valley press reports revealed that OpenAI, the world-famous maker of ChatGPT, was planning to “simplify” its unusual nonprofit structure and morph into a more conventional company.
Aguilar, who leads the Los Angeles–based nonprofit LatinoProsperity, feared the move would betray OpenAI’s founding mission to “benefit all of humanity” by empowering investors to reap unlimited private profits.
That moment spurred Aguilar to action. He began speed-dialling allies in California’s philanthropic and civil rights circles, determined to scrutinise OpenAI’s restructuring plan and its implications for the public interest.
Over the ensuing months, a broad coalition of more than 50 advocacy organisations took shape: labour unions, community foundations, and tech accountability groups, all rallying around a shared concern that OpenAI’s transformation from a nonprofit lab into a profit-driven corporate behemoth could set a dangerous precedent.
By January 2025, they were urging California’s attorney general to intervene, warning that billions in charitable assets intended for the public good were at risk of being effectively privatised.
What had begun as a little-noticed corporate restructuring proposal was quickly snowballing into a high-stakes battle over the future of OpenAI, the governance of artificial intelligence, and the integrity of the nonprofit system itself.
From idealistic lab to tech titan
OpenAI’s origin story is steeped in idealism. The San Francisco research lab launched in late 2015 as a nonprofit venture backed by tech luminaries including Sam Altman and Elon Musk, who together pledged $1 billion to fund it. At the time, concerns were growing that AI development was dominated by a few big tech firms driven by profit.
OpenAI’s founders vowed a different path: build advanced AI in the service of all humanity and share the research openly. Its charter emphasised long-term social benefits over financial gain, even declaring that when conflict arises, the nonprofit mission would “take precedence” over any obligation to generate profit.
For several years, OpenAI operated like an AI think tank, publishing cutting-edge research and freely sharing its code. But as the race to develop powerful AI accelerated, the organisation faced a dilemma. Training world-class AI models required far more computing power (and money) than its initial philanthropic funding could support.
By 2019, OpenAI’s leadership made a controversial pivot. They set up a hybrid structure, creating a for-profit arm (today OpenAI Global, LLC) under the umbrella of the original nonprofit, OpenAI Inc. This allowed them to attract venture capital while ostensibly keeping the nonprofit’s oversight and mission intact.
This compromise introduced a novel “capped-profit” model. Investors could earn returns on their money, but those returns were capped at a set multiple, reportedly 100x for the earliest backers. Beyond that ceiling, any surplus value would flow back to the nonprofit to fund its mission. The idea was to access billions in Silicon Valley capital without fully sacrificing OpenAI’s altruistic DNA.
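To make the mechanics concrete, here is a minimal sketch of how such a capped-return split works. The 100x cap and the dollar figures are illustrative assumptions; OpenAI’s actual terms varied by investor and were never fully disclosed.

```python
# Illustrative sketch of a capped-profit payout. The 100x cap (the
# multiple reportedly applied to OpenAI's earliest backers) and the
# figures below are assumptions for demonstration, not actual terms.

def capped_payout(invested: float, gross_return: float,
                  cap_multiple: float = 100.0):
    """Split a gross return between an investor and the nonprofit.

    The investor keeps returns up to cap_multiple times the amount
    invested; anything beyond that ceiling flows to the nonprofit.
    """
    ceiling = invested * cap_multiple
    investor_share = min(gross_return, ceiling)
    nonprofit_share = max(gross_return - ceiling, 0.0)
    return investor_share, nonprofit_share

# A $10m stake that grows to $2bn: the investor keeps $1bn (the 100x
# ceiling), and the remaining $1bn accrues to the nonprofit's mission.
print(capped_payout(10e6, 2e9))  # (1000000000.0, 1000000000.0)
```

The cap, in other words, functions as a ceiling on investor upside: everything above it is earmarked for the mission, which is precisely the feature the later restructuring would remove.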
In practice, the hybrid model allowed OpenAI to tap deep-pocketed backers. Microsoft alone poured in a reported $13 billion (across multiple funding rounds) for a share of OpenAI’s technology and profits.
By late 2023, OpenAI’s ChatGPT had become a global sensation, drawing roughly 100 million weekly users and cementing OpenAI’s position as a leading AI provider.
Yet the influx of capital came with mounting pressure to compete and monetise. OpenAI began behaving more like a Silicon Valley startup than a pure research outfit. It charged for API access, launched paid subscriptions, and curtailed its once-open research disclosures for “security and competitive” reasons. The tension between its nonprofit ideals and market ambitions grew harder to ignore.
The breaking point came in November 2023 with an extraordinary boardroom drama. OpenAI’s nonprofit board, tasked with ensuring the company stayed true to its mission, abruptly fired CEO Sam Altman, saying he had been “not consistently candid” in his communications.
The shock ouster of Altman, the charismatic figurehead of ChatGPT’s rise, sent the tech world into a tailspin, especially after it emerged that concerns about AI safety and OpenAI’s rapid pace might have been at issue.
Within five days, Altman was reinstated following an employee and investor uproar, and most of the board members who had ousted him resigned under pressure. The episode was a stark illustration of OpenAI’s governance quandary: the nonprofit oversight that reassured the public about OpenAI’s benevolent mission had become a wildcard inside a company then valued at close to $100 billion.
As Vox observed, the saga “made it clear that the nonprofit’s control of the for-profit could potentially have huge implications,” particularly for Microsoft, which had billions at stake. After that turmoil, OpenAI’s leadership grew even more convinced that its existing structure was unsustainable.
Shift to a public-benefit corporation
In late 2023, OpenAI started mapping out a plan to overhaul its corporate structure. The centrepiece would be converting OpenAI’s for-profit subsidiary into a new entity incorporated as a public benefit corporation (PBC). A PBC is a company that balances profit-making with a stated social mission.
Crucially, unlike the earlier capped-profit model, this change would remove the hard ceiling on investor returns, allowing venture backers to profit without limit. OpenAI’s nonprofit parent would likely either become a minority shareholder in the new PBC or receive a one-time payout. In other words, OpenAI was preparing to shed the very safeguards that once made it unique: the nonprofit’s veto power and the cap on profits, all in favour of a more typical corporate arrangement.
Developing advanced AI has become a capital-intensive arms race, with rivals like Google, Meta, Anthropic, and Elon Musk’s new startup xAI all vying for talent and computing resources. OpenAI’s leadership argues that staying ahead in AI requires access to far greater funding than a nonprofit model can provide. Under American law, nonprofits face strict limits on raising investment.
Neil Elan, a partner at a tech law firm, notes that nonprofits “can’t sell stock or offer returns.” He explains: “Equity is what drives a lot of these high-valuation models in Silicon Valley. Without the ability to issue lucrative shares, OpenAI feared it wouldn’t fully compete with Meta, Microsoft, and Google, which have access to a lot more resources.”
Those worries crystallised as OpenAI negotiated a blockbuster financing deal earlier this year. In April 2025, OpenAI announced it had closed a record-setting $40 billion funding round led by Japan’s SoftBank. The deal pegged OpenAI’s valuation at an eye-popping $300 billion. The catch? Roughly 75% of that investment is contingent on OpenAI completing its structural revamp by the end of 2025.
The investment agreement allows SoftBank and other backers to withdraw up to $30 billion of the funding if OpenAI does not transition to the new PBC structure on schedule. The message was clear: to secure the full war chest needed for its aggressive AI roadmap, OpenAI had to cast off any structural quirks that made investors nervous.
And the investors were getting nervous. The November boardroom fracas had highlighted how OpenAI’s nonprofit oversight could unpredictably intervene in business decisions. This created a risk factor almost unheard of among tech unicorns.
According to a previously unreported letter from OpenAI’s lawyers to California regulators, “many potential investors in OpenAI’s recent funding rounds declined to invest due to its unusual governance structure.”
This contradicts earlier narratives that investors were lining up in droves.
Indeed, some of Silicon Valley’s biggest players baulked at OpenAI’s hybrid model after seeing Altman briefly dethroned by a nonprofit board. To calm the investor concerns, OpenAI’s leadership initiated plans soon after the Altman episode to remove nonprofit control and restructure as a profit-centric PBC. In effect, the startup’s meteoric success was forcing it to become more like a traditional corporation in order to keep raising capital at sky-high valuations.
Sam Altman himself has acknowledged that, in hindsight, OpenAI’s founders underestimated how costly and fast-moving the AI race would become.
He noted that if he could redo 2015, he might have structured OpenAI differently from the start. Other AI startups, such as Anthropic (founded by ex-OpenAI researchers) and xAI (founded by Musk), learnt from OpenAI’s example and launched as public benefit corporations from day one.
Now OpenAI is racing to catch up and give its backers the standard corporate framework and eventual stock market payday they expect. Investors are already eyeing a potential OpenAI IPO by 2027, which could turn early stakes by firms like Microsoft and SoftBank into massive profits.
The activist backlash
As OpenAI moved forward with its restructuring behind closed doors, Orson Aguilar and his allies mobilised a counteroffensive in public. Aguilar’s first call after digesting the news in October 2024 was to Fred Blackwell, CEO of the San Francisco Foundation.
Blackwell, a prominent figure in Bay Area philanthropy, immediately recognised echoes of a past fight. In the 1990s, he and other advocates had taken on a wave of nonprofit hospitals and insurers attempting to convert into for-profit companies.
As those healthcare nonprofits converted, some executives manoeuvred to boost payouts for themselves while hollowing out the charitable foundations that were meant to receive the nonprofits’ assets.
Consumer advocates in California, including Blackwell’s colleague Judith Bell, latched onto an obscure but powerful statute. Under state law, all the assets of a nonprofit are irrevocably dedicated to the public and belong to the people of the state forever, only to be used for charitable purposes.
The law grants California’s attorney general broad authority to approve or deny any conversion of a nonprofit’s assets and to ensure that the public is protected when a nonprofit becomes a private entity.
Bell and others built a coalition in the 1990s that pressured the California Attorney General to enforce those rules rigorously. The result was a series of agreements that preserved enormous charitable endowments even as hospitals transitioned into for-profit status.
In California alone, that activism helped create three of the state’s largest private foundations, including the $4 billion California Endowment, funded with assets from the conversion of the nonprofit health plan Blue Cross of California.
Advocates estimate that those deals kept some $15 billion in charitable funds from being diverted to the converting businesses and their investors.
“You can protect the charitable assets and allow these companies to go forth in the for-profit world,” Bell says, reflecting on that chapter. In other words, compromise is possible. Companies can convert, but the public must get its fair share.
Seeing history repeat itself, Bell, Blackwell, and Aguilar decided to revive that old playbook. By late 2024, they had assembled a broad coalition of community and labour organisations. More than 50 groups, ranging from tech accountability nonprofits to unions like SEIU, expressed alarm at the idea of OpenAI’s charitable assets being diverted to private gain.
In January 2025, the coalition launched a public campaign and formally petitioned California Attorney General Rob Bonta to scrutinise OpenAI’s plans. Their request was direct: Do not approve this conversion without firm guarantees that OpenAI’s nonprofit assets (both tangible and intangible) will be fully valued and used for the public good going forward.
The coalition’s letter to Bonta, co-steered by Aguilar, expressed doubt that OpenAI intended to comply with the spirit of the law. They accused OpenAI of skirting transparency and failing to detail how its nonprofit stake, which some estimate could be worth $20 to $30 billion given OpenAI’s valuation, would be protected.
Any scheme that values OpenAI’s nonprofit share at even a penny less than fair market value “would be unlawful,” Aguilar argues.
He adds that anything short of full independence for the nonprofit arm risks allowing commercial imperatives to override the charity’s purpose. In short, the activists want OpenAI’s nonprofit to remain firmly in control or receive a payout that reflects its foundational role in creating ChatGPT and other breakthroughs.
Ideally, they envision that endowment fuelling what could become one of the best-resourced nonprofits in history, a massive charitable foundation to fund AI for good, free from the influence of OpenAI’s new for-profit owners.
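For a rough sense of where that $20 to $30 billion estimate could come from, the arithmetic below applies hypothetical ownership stakes to OpenAI’s reported $300 billion valuation. The 7% and 10% figures are assumptions for illustration only; OpenAI has not disclosed what share of the restructured company the nonprofit would hold.

```python
# Back-of-the-envelope arithmetic behind the coalition's estimate.
# The stake percentages are hypothetical, not disclosed figures.

valuation = 300e9  # reported valuation from the April 2025 funding round

for stake in (0.07, 0.10):
    print(f"{stake:.0%} stake -> ${valuation * stake / 1e9:.0f}bn")
# 7% stake -> $21bn
# 10% stake -> $30bn
```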
This grassroots pressure has already scored some symbolic wins. By March 2025, as media coverage of the “OpenAI rebellion” intensified, the company’s leadership reached out to engage with the coalition. Aguilar, Blackwell, and others sat down with OpenAI representatives in San Francisco for a tense meeting.
According to participants, OpenAI staff were eager to correct what they described as “misconceptions” about the restructuring and even asked for feedback on how the nonprofit’s mission might evolve in the future. But when Aguilar pressed for concrete answers on how much funding and independence the nonprofit would retain under the new plan, he says OpenAI’s emissaries deflected. Not long after, OpenAI announced it was creating a special advisory commission, a move widely seen as a response to the coalition’s campaign.
California’s attorney general enters the fray
California Attorney General Rob Bonta, who oversees nonprofit charities in the state, became a pivotal figure in this unfolding drama. Prompted by the January petition, his office quietly opened an investigation into OpenAI’s plans and requested financial records from the company earlier this year.
While such probes are typically confidential, Bonta’s spokesperson publicly confirmed in the spring that the California Department of Justice was actively reviewing OpenAI’s restructuring and remained in continued conversations with the company.
OpenAI must convince Bonta that its conversion plan will not improperly discard any charitable trust obligations. The attorney general has the authority to block the deal or to impose conditions if it fails to meet legal requirements.
At the centre of this scrutiny is a distinctly Californian legal safeguard. When a nonprofit organisation, like OpenAI Inc., houses valuable assets or subsidiaries, those assets are considered charitable in perpetuity. The law allows nonprofits to convert or sell assets, but only if the public interest is fairly served and the charitable value is preserved.
Any windfall generated from, for instance, turning OpenAI’s research into a public stock offering should primarily benefit nonprofit coffers earmarked for public benefit. What activists fear is a repeat of past abuses: insiders or investors structuring deals that shortchange the nonprofit, and by extension the public, out of the true value of what was built under the nonprofit’s umbrella.
Critics say OpenAI’s original restructuring plan appeared to do exactly that. Under the version circulated in late 2024, OpenAI’s nonprofit parent would sell its majority control in exchange for a stake in the new PBC and certain licensing rights. However, it would no longer directly own the core technology it had developed.
Observers noted that the nonprofit appeared to be transforming into just another investor, sacrificing governance for funding, which effectively diminished its public-interest role.
“Nonprofit control over how AGI is developed and governed is so important to OpenAI’s mission that removing control would violate the special fiduciary duty owed to the nonprofit’s beneficiaries,” argued a group of prominent tech experts and legal scholars in an open letter to Bonta.
According to that letter, “The nonprofit’s beneficiaries are all of us, the general public, and no amount of payout could compensate for the loss of a direct role in shaping the future of one of the world’s most powerful AI labs.”
This expert letter, published on a site pointedly titled “Not for Private Gain”, was signed in April by more than 30 prominent AI voices, including pioneering researcher Geoffrey Hinton, AI ethics researcher Margaret Mitchell, and leading AI safety expert Stuart Russell, as well as several former OpenAI insiders.
They called on Bonta and the Delaware attorney general (since OpenAI is incorporated in Delaware) to intervene and prevent any restructuring plan that removes public oversight of OpenAI’s artificial general intelligence (AGI) research. The letter warned that eliminating the nonprofit’s control would gut essential governance mechanisms that keep OpenAI’s profit motives in check.
Legal firestorm and public reckoning
Pressure was also mounting from unexpected directions. In early 2024, Elon Musk, who had co-founded OpenAI but left in 2018, filed a lawsuit to halt the restructuring. He claimed that OpenAI was abandoning the charitable mission for which he had originally donated funds. Musk further alleged that his $100 million donation had been improperly used to help establish the for-profit arm.
Surprisingly, some of OpenAI’s competitors voiced support for Musk’s challenge. Meta, the parent company of Facebook, publicly backed the effort to stall the transformation. While critics noted that both Musk and Meta had their own competitive reasons for opposing OpenAI’s rise, their involvement added to the scrutiny surrounding the company’s motives.
Musk’s legal action did not immediately block OpenAI’s plans. A federal judge in California declined to grant a preliminary injunction in the spring of 2025. However, the lawsuit added complexity and drew public attention. OpenAI responded by countersuing Musk, accusing him of trying to undermine a rival out of self-interest.
Critics emphasised the obvious: many of OpenAI’s financial backers were traditional venture capitalists seeking substantial returns, not long-term philanthropists. When OpenAI transitions fully into a profit-maximising enterprise, sceptics fear it will become beholden to shareholder interests.
The consequences of this transition could extend far beyond OpenAI itself. If the company completes its conversion with minimal nonprofit influence, it might establish a precedent for other tech ventures. Future startups could adopt nonprofit language and structures to attract donations and goodwill, only to switch to for-profit status once they reach commercial success. This possibility alarms nonprofit advocates, who warn that it could erode public trust in charitable innovation.
On the other hand, if regulators force significant concessions, such as creating a large, independent foundation or embedding real public-interest oversight, it could reaffirm the public’s rightful stake in high-impact technologies. That kind of intervention would send a powerful signal that phrases like “for the benefit of humanity” must be backed by accountability and tangible structures.
Ultimately, the key question concerns governance. Who will lead the development of the next generation of artificial intelligence? Will private investment prevail, or will public interest play a significant role? If OpenAI’s nonprofit organisation becomes a passive shareholder, there is a risk that the values of safety, equity, and long-term benefit may be overshadowed by a focus on quarterly earnings.
However, if California’s attorney general and other regulators take action, the public could retain democratic oversight of one of the most significant technologies of our time.
