
Sam Altman & the OpenAI boardroom drama

OpenAI’s board reportedly lost confidence in Sam Altman’s ability to lead the organisation and accused him of not being candid in his conversations

From November 17-22, OpenAI, the pioneer behind the breakthrough generative AI tool ChatGPT, saw the worst crisis of its eight-year existence, as its maverick CEO Sam Altman was ousted by the board of directors, briefly joined Microsoft (the AI firm’s key investor), and then came back to head his old company.

Analysts now feel the incident is an example of how a company’s corporate affairs should not be run: by people who, apart from having little background in business administration, can compromise their oversight duties by making decisions based on a superficial reading of events.

After returning to OpenAI’s helm, Altman replaced the board. It now has Adam D’Angelo, CEO of Quora (retained from the previous board); former Salesforce co-CEO Bret Taylor; and Larry Summers, the former US Treasury Secretary and president of Harvard University.

The previous panel had Tasha McCauley (head of GeoSim Systems) and Helen Toner (an expert on AI and foreign relations at Georgetown’s Centre for Security and Emerging Technology). McCauley sits on the United Kingdom board of Effective Ventures, a group affiliated with effective altruism, and Toner has also worked for Open Philanthropy, a US-based effective-altruism group.

Neither Sam Altman nor OpenAI co-founder Greg Brockman features on the new board, and the panel will soon have six additional members.

Transparency went missing

OpenAI’s board reportedly lost confidence in Altman’s ability to lead the organisation and accused him of not being ‘candid in his conversations’. Altman wanted to transform the venture from a non-profit into a commercially viable business, which brought him into confrontation with the board. The board felt that the tech maverick was moving too quickly on the AI front, “without sufficient concern to the safety implications of a technology that, left unchecked, could create content capable of harming the public.”

It is plausible that independent directors like McCauley and Toner were influenced by the sceptics who thought OpenAI was moving away from its mission of “building safe and beneficial artificial general intelligence for the benefit of humanity” in pursuit of commercial gain.

Earlier in November, tech companies and Western governments agreed on a new safety-testing regime to allay concerns about AI’s pace of growth and the lack of global safeguards in place to control it. United Nations Secretary-General Antonio Guterres said that the world was “playing catch-up” in efforts to regulate AI, which had “possible long-term negative consequences on everything, from jobs to culture”.

Coming back to OpenAI, Sam Altman’s firing wasn’t due to his company’s financial, business, safety or security/privacy practices.

The directors’ duty, as written into OpenAI’s organisational structure, was to ensure that AI benefits all of humanity. Did that responsibility weigh on them while removing Altman? Were they influenced by ‘AI apocalypse’ worries?

OpenAI co-founder and chief scientist Ilya Sutskever recently spoke about the venture anticipating a technology breakthrough that may come with ‘safety concerns’. However, Sutskever himself dismissed the possibility that the board acted out of ‘AI apocalypse’ fears while removing Sam Altman.

Neither Emmett Shear, the co-founder of video streaming site Twitch who briefly took over the AI venture’s leadership, nor Satya Nadella, who leads tech giant Microsoft (which also holds a 49% stake in OpenAI), was informed of the actual reason behind Altman’s removal.

In 2019, Altman created a for-profit arm within OpenAI to draw commercial investors, before launching ChatGPT at the end of 2022. What started as a research lab became a professional tech company.

In November 2023, OpenAI hosted its first developer conference, where Sam Altman announced an app store for chatbots.

The old OpenAI board functioned as an entity independent of the for-profit company. Ilya and the three independent directors formed the majority needed to make the leadership changes, following the organisational bylaw that allows for the removal of any director, including the chair, with or without cause.

Ilya’s behaviour makes him a contradictory figure. After removing Sam Altman, he “deeply regretted” his role in the board’s actions, a U-turn from his earlier concern that OpenAI’s fast-paced commercialisation of its technologies was compromising safety.

Ilya Sutskever faces the heat

From November 17-21, OpenAI faced a massive internal backlash over Altman’s firing, with 743 out of 770 of its staff threatening to quit the company and join Microsoft en masse. Their condition was simple: the removal of the board, including Sutskever, who, as per The Atlantic, “likes to burn effigies and lead ritualistic chants at the company, and appears to have been one of the main drivers behind Altman’s ousting.”

As Sam Altman briefly joined Microsoft, a perplexed Ilya Sutskever wrote on his X account, “I never intended to harm OpenAI.” The crisis hit OpenAI just as it was eyeing a USD 90 billion valuation.

“Sutskever has established himself as an esoteric ‘spiritual leader’ at the company, cheering on the company’s efforts to realise artificial general intelligence (AGI), a hazy and ill-defined state when AI models have become as or more capable than humans, or maybe, according to some, even godlike,” The Atlantic commented. Altman too championed attaining AGI as OpenAI’s number one goal.

As per reports, Ilya Sutskever, apart from making employees chant “Feel the AGI! Feel the AGI!”, even commissioned a wooden effigy representing an “unaligned” AI that works against the interests of humanity, only to set it on fire.

Now imagine such a volatile personality, along with three other independent directors (two of them from philanthropic backgrounds), deciding to fire Altman, who himself stresses that AI should be governed so that the tool acts responsibly towards humanity.

“Instead of focusing on meaningfully advancing AI tech in a scientifically sound way, some board members sound like they’re engaging in weird spiritual claims,” is how The Atlantic summed up the situation, which also explains why Sam Altman wanted to replace the board so desperately.

A delusional board

In 2015, OpenAI began as a non-profit research lab, with the mission of safely developing artificial intelligence at or beyond the human level, termed artificial general intelligence or AGI.

As per Sutskever, the tech venture found a promising path in large language models (LLMs), as they started generating strikingly fluid text. However, developing and implementing those models required huge amounts of computing infrastructure and capital. So, OpenAI created its commercial arm to draw outside investors. Sensing the opportunity, Microsoft jumped into the fray. Apart from helping OpenAI develop and launch ChatGPT, the Satya Nadella-led company is also using the start-up’s technology in products like Bing Chat and Copilot.

“Virtually everyone in the company worked for this new for-profit arm. But limits were placed on the company’s commercial life. The profit delivered to investors was to be capped—for the first backers at 100 times what they put in—after which OpenAI would revert to a pure non-profit. The whole shebang was governed by the original non-profit’s board, which answered only to the goals of the original mission and maybe God,” Wired commented.

“We are the only company in the world which has a capped profit structure. Here is the reason it makes sense: If you believe, like we do, that if we succeed well, then these GPUs are going to take my job and your job and everyone’s jobs, it seems nice if that company would not make truly unlimited amounts of returns,” Ilya Sutskever told the media outlet.

So the picture is clear: while profit-seeking was a must for OpenAI to carry on its research activities, the board’s responsibility was to ensure that “AI doesn’t get out of control.”
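
To make the ‘capped profit’ arithmetic concrete, here is a minimal sketch in Python. The function name and figures are hypothetical illustrations of the 100x cap described above, not OpenAI’s actual investment terms:

```python
def capped_payout(investment, gross_value, cap_multiple=100):
    """Investor payout under a hypothetical capped-profit structure.

    Anything the stake earns beyond cap_multiple * investment
    reverts to the non-profit rather than the investor.
    """
    return min(gross_value, cap_multiple * investment)

# Hypothetical first backer: $1M in, stake notionally worth $250M
print(capped_payout(1_000_000, 250_000_000))  # 100000000, i.e., capped at $100M
```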

The board was playing the holy role of “Guardian of Humanity”. Now reports suggest that through ‘Project Q’, Altman achieved a breakthrough in OpenAI’s long search for AGI, as the new model solved maths problems. However, unlike a calculator, AGI can generalise, learn, and comprehend.

OpenAI describes AGI as ‘autonomous systems that surpass humans in most economically valuable tasks.’ While the venture’s Chief Technology Officer Mira Murati had acknowledged the existence of ‘Project Q’ in an internal email to employees, she also alerted them to ‘certain media stories’ without commenting on their accuracy.

The board was warned about the potential dangers which the model could bring. However, there was no clarity on what those dangers were.

What is even more perplexing is the fact that Altman himself warned about the AGI’s cons, as he wrote in one of his blogs, “AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

When Wired spoke with a source familiar with the OpenAI board’s thinking, it emerged that, by firing Sam Altman, the decision-makers wanted to make sure the company developed powerful AI safely.

“Increasing profits or ChatGPT usage, maintaining workplace comity, and keeping Microsoft and other investors happy were not of their concern. In the view of directors Adam D’Angelo, Helen Toner, and Tasha McCauley—and Sutskever—Altman didn’t deal straight with them. Bottom line: The board no longer trusted Altman to pursue OpenAI’s mission. If the board can’t trust the CEO, how can it protect or even monitor progress on the mission?” the report commented further.

Instead of initiating a discussion with Sam Altman on ‘Project Q’, the board decided to arm-twist the tech maverick and force him to leave his own company. The result was completely the opposite. There is no doubt that, when it comes to generative AI, Altman is a cult hero: a pioneer of the 21st-century AI revolution who, at the same time, talks about the tool’s responsible and regulated usage.

Microsoft hiring him, even for a brief period, proved the point. In Silicon Valley, if Satya Nadella doesn’t hold you in high regard, no one else will.

“Altman did little or nothing to dissuade the outcry that followed. To the board, Altman’s effort to reclaim his post, and the employee revolt of the past few days, was kind of a vindication that it was right to dismiss him. Clever Sam is still up to something! Meanwhile, all of Silicon Valley blew up, tarnishing OpenAI’s status, maybe permanently,” Wired commented further.

While 743 out of 770 OpenAI staffers asking the board to quit in an open letter, accusing the members of being “incapable of overseeing OpenAI,” might sound unprecedented, these aggrieved professionals were correct from their standpoint.

As per the Wired report, the board compared negotiating with the outraged staff to negotiating with terrorists. Isn’t that a prime example of delusion? If the board members felt Sam Altman was not being honest in his communication with them, they should have talked the matter out, rather than jumping the gun and forcing the OpenAI CEO to leave his venture, thereby opening the Pandora’s box called ‘chaos’.

“Having deleted his distrust of Altman, Sutskever and Altman have been sending love notes to each other on X, the platform owned by Elon Musk, another fellow OpenAI cofounder, now estranged from the project,” the report noted.

In fact, in the worst-case scenario, had Sam Altman stayed in Microsoft’s AI division, OpenAI staffers would have joined him too, thereby orchestrating the death of the tech sector’s hottest start-up.

A New York Times report even claimed that OpenAI leaders thought that allowing the company to be destroyed “would be consistent with the venture’s mission.”

However, things changed as the board came to its senses and agreed to Altman’s return as OpenAI CEO. Two of the directors resigned, leaving only D’Angelo on the board; he was joined by Bret Taylor and Lawrence Summers. The new board and Sam Altman need to decide upon a couple of things. Should the venture continue as a non-profit with a for-profit arm? And if a ‘Project Q’-like situation breaks out again, how should it be handled without forcing important officials to resign?

Had OpenAI been dissolved, Microsoft would have gained immensely. OpenAI is known for pioneering the 21st century’s AI revolution, and acquiring the venture’s talent pool would have been the best coup in Silicon Valley’s history.

It would have been the ultimate irony: a venture like OpenAI, formed with the intention of stopping Silicon Valley giants from dominating AI technology, delivering its talent and research infrastructure to a multi-trillion-dollar giant.

“Microsoft would have no qualms whatsoever about pocketing truly unlimited amounts of returns from future breakthroughs from the ex-OpenAI staff—something that anyone who was thinking of following Altman over there might have pondered, in light of their previous time at a company with different founding principles,” Wired summed the hypothesis up further.

However, OpenAI will continue to exist and, as per reports, is now working with PPO (Proximal Policy Optimisation), a reinforcement learning algorithm used to train AI models to make decisions in complex or simulated environments. PPO’s versatility allows it to excel in scenarios like robotics, autonomous systems, and algorithmic trading, and the venture has adopted it in a variety of use cases, from training agents in simulated environments to mastering complex games. The latest buzz is that OpenAI is now aiming to achieve AGI through gaming and simulated environments with PPO’s help.
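
For context, PPO’s central idea is a ‘clipped’ objective that stops each policy update from drifting too far from the policy that collected the training data. The snippet below is a minimal sketch of that objective in Python with NumPy, using made-up numbers; it illustrates the published formula rather than OpenAI’s production code:

```python
import numpy as np

def ppo_clipped_objective(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper (maximised during training)."""
    # Probability ratio between the updated policy and the data-collecting policy
    ratio = np.exp(new_log_probs - old_log_probs)
    # Clipping keeps each update inside a small trust region around the old policy
    clipped_ratio = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Take the pessimistic (element-wise minimum) surrogate, then average
    return np.mean(np.minimum(ratio * advantages, clipped_ratio * advantages))

# Made-up log-probabilities and advantage estimates for three sampled actions
old_lp = np.array([-1.2, -0.8, -2.0])
new_lp = np.array([-1.0, -0.9, -1.5])
adv = np.array([0.5, -0.3, 1.2])
print(ppo_clipped_objective(new_lp, old_lp, adv))
```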

Both the tech industry and OpenAI need each other, given that innovation and positive disruption keep the sector moving. ChatGPT, Microsoft 365 Copilot, and CoAssit are all being built and run on NVIDIA’s AI supercomputer and data centre infrastructures. Add Microsoft’s stable backing of OpenAI. What we are witnessing right now is the formation of an ecosystem that is set to change the industry forever.

Therefore, a stable corporate boardroom is necessary, both for OpenAI and for this ecosystem.
