The new OpenAI has a lot of explaining to do
The show is over but the company’s ethics and mission are even more in question than before. Here’s why.
KOSTAS FARKONAS
Published: November 27, 2023
Never let it be said that big tech doesn’t have any real surprises in store for the world: the last 10 days have offered everyone the kind of spectacle and drama that showbiz itself would be proud to have produced. That might very well be a problem in and of itself, of course, but more on that later. What comes first is this: the most important organization when it comes to research on artificial intelligence – OpenAI, the company that brought ChatGPT to the masses – has been in turmoil, as an intense clash of perspectives (and, as a result, a clash of interests) within its ranks finally came to a head. For reasons still not made clear by any of the involved parties – rather tellingly, if not suspiciously – the board of directors at OpenAI fired the CEO and co-founder of the company, Sam Altman, on Friday, November 17th.
What followed was a rollercoaster of events that shocked, amazed and disappointed millions of people, dominating headlines and social media streams for days. These events have already been reported on so extensively that there’s hardly any point in commenting on them anymore. But there certainly is a point in asking OpenAI – if this whole show is now truly over – a lot of hard questions. What follows are the reasons why that is, as well as the questions themselves. For future reference, if nothing else, read on.
What happened over at OpenAI that we know about
So here’s a quick reminder of all that went down publicly at OpenAI just over a week ago. The board of OpenAI fired Sam Altman on Friday, November 17th; president Greg Brockman resigned in solidarity a few hours later; three high-level researchers followed suit; the company’s CTO, Mira Murati, took over as interim CEO for the next day or two, while fingers were pointed at Ilya Sutskever, OpenAI’s chief scientist, as being instrumental in all of this. Negotiations between OpenAI’s board and Altman for his return happened at some point during that weekend, but they fell apart.
Then things got really weird.
Microsoft announced that Altman and Brockman were joining the company to lead a new AI division. The OpenAI board announced former Twitch boss Emmett Shear as the company’s new interim CEO. Hundreds of OpenAI employees – including Murati and Sutskever, who apparently had a change of heart – signed a letter threatening to leave the company for jobs at Microsoft unless Altman was reinstated. Altman, however, would not consider returning to OpenAI unless the board of directors that fired him was replaced, which it was on Wednesday. The composition of the new board is not finalized – more members will be added, most probably including Altman himself and someone from Microsoft – but at least there’s “an agreement in principle” for Altman’s and Brockman’s return.
As for OpenAI? After everything that’s happened during the past few days regarding its leadership, it kind of both has and hasn’t got a proper leader in Sam Altman at the same time (Schrödinger would be proud). See, the interim CEO is not in a position to make any important decisions, but Altman has not yet officially come back as CEO and OpenAI has not made any public statements since Wednesday. It all ended as abruptly as it began on November 17th.
These are the facts made public. So here we are – ten days after this tasteless, unsettling show began – and OpenAI has the same CEO, the same president, the same employees, a different board of directors but, in theory, the same mission. Is it the same company? No. A few important aspects of OpenAI have now changed. On top of that, some things we did not know – or hadn’t realized – about OpenAI, things that carry great significance after all, have come to everyone’s attention. The very same things, in fact, that most probably led to this mess in the first place. Let’s take a look.
What the events at OpenAI were really about
All that took place during the past few days invited a closer look at what OpenAI has been doing since ChatGPT took the world by storm a year ago and, well, it’s not a pretty sight. As it turns out, the widely known ideological conflict between the two “camps” within the company – the camp calling for the fast commercialization of AI and the camp calling for more careful AI research, as well as AI adoption in a slower, probably regulated, manner – is more of a chasm than a difference of opinion. What’s more, it now seems that it was something Sam Altman (who firmly belongs to the pro-commercialization camp) did or didn’t do that led to the events of November 17th.
Independent reports by a number of different publications provide more pieces of the puzzle. According to The New York Times, a few weeks before all that went down at OpenAI, Altman confronted Helen Toner, then a member of the company’s board, about a paper she had co-authored in October regarding ChatGPT. Toner is the director of strategy at Georgetown’s Center for Security and Emerging Technology (CSET) and a firm believer in the careful advancement of AI, in transparency during its development and in safety precautions around the way AI is implemented in various projects. In that paper, Toner criticized OpenAI for making ChatGPT publicly available last year in its well-known, flawed form, as its very release sparked “a sense of urgency inside major tech companies” to ensure they did not fall behind and to “accelerate or circumvent internal safety and ethics review processes” as a result.
A different report provides another piece of the puzzle: according to Reuters, shortly before the original OpenAI board fired Altman, a number of staff researchers sent its members a letter “warning of a powerful artificial intelligence discovery” that “could threaten humanity”. This scientific breakthrough is thought to be connected to a project called Q* (Q-Star) and the company’s constant inching towards the ultimate goal of creating an Artificial General Intelligence (AGI) agent. AGI is also known as “true artificial intelligence”: an autonomous system capable of “reasoning” in the sense that most of us use that term today. Sam Altman himself seemed proud enough to claim, just two weeks ago at the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, that major advances in AGI were in sight at his company, as “OpenAI pushed the veil of ignorance back and the frontier of discovery forward recently”.
Whether the possibility that Altman had not informed the board’s members of this breakthrough was the straw that broke the camel’s back in their eyes is still unknown and debatable. It might explain the board’s claim of Altman “not being consistently candid in his communications” with its members, but it might also not be the only such case the board was referring to. What is not debatable, however, is the fact that this board was already concerned by Altman’s apparent determination to commercialize artificial intelligence products and services before fully understanding the consequences of such a choice.
Altman was even making moves that – in theory – have precious little to do with the primary mission of OpenAI, like his efforts to secure funds for launching a new AI chip-making company: according to Bloomberg, Altman wanted to compete with Nvidia in the AI accelerator space, providing more cost-effective solutions to customers.
So now… questions – starting with “why?”
Put together, all the information mentioned above might be enough to describe how things came to a head at OpenAI, leading to the events of November 17th. But it’s not enough to explain why… which is what anyone interested in the future of artificial intelligence would like to know. Why did the original board of directors fire Altman? What was the reasoning behind that move? What was the excuse that triggered it? In short: why this way, why now?
One possible answer is this: OpenAI’s unusual structure is not fully aligned – or even aligned at all anymore, some might say – with its mission statement. OpenAI is, in theory, a non-profit research organization focused on AI. In its own words, it claims to be “an AI research and deployment company” whose mission is “to ensure that artificial general intelligence benefits all of humanity”. In practice, though, this non-profit relies on a for-profit arm that has attracted generous funding from private investors and corporations whose interest in AI is not humanistic or scientific. It’s purely commercial and financial.
These two arms of the same company depend on one another, but only one of them is even partially interested in the more deliberate, safe and possibly regulated advancement of AI. The non-profit would most probably not be able to continue operating at the same pace without the for-profit arm, but the non-profit arm is also the one doing the research and development on which products such as ChatGPT or DALL-E are based. These are products that generate hype around AI, attracting more investors and funding. But those investors constantly demand accelerated research and commercialization from a non-profit whose primary concerns regarding AI should, theoretically, lie elsewhere. It’s such a difficult balancing act that, frankly, it’s surprising OpenAI’s internal clash didn’t come to a head much sooner than it actually did.
It’s not like this was not already apparent to others, too. Say what you want, for instance, about Elon Musk – yours truly has never minced words when it comes to his opinion of the man – but he was right about something he has repeatedly stated: the principles on which OpenAI was founded back in 2015 by him and others are simply not the same ones the company is now operating on, especially after Microsoft’s huge investment from 2019 onwards.
Musk has a clear motive to make such a claim (he owns a competing product to ChatGPT named Grok), but that does not make it any less true: for a company called “OpenAI”, this one operates in the least open, most opaque manner possible. Nobody outside OpenAI actually knows how far AI or AGI research has come in the company’s labs. It’s not clear who the company shares its findings or progress with. The level of AI tech Microsoft or other investors have access to is also not publicly disclosed. This is definitely not how one would describe an “open” non-profit organization.
Quo vadis, OpenAI?
It’s fair to say that at the heart of the events of November 17th lies Sam Altman, as a business leader as well as a representative of the “boosters camp” (the one asking for accelerated AI development and fast deployment at scale) within the company and outside its walls. But there are other questions that OpenAI needs to answer, regardless of Sam Altman’s status going forward.
An obvious one, for instance, concerns its very nature as a company: is OpenAI still a non-profit organization if it’s largely funded by private investors and corporations that look forward to direct financial benefit from the advanced research and development of AI? Can it still claim it strives to ensure that AGI benefits all of humanity, when it allows clearly unproven and flawed AI tech – such as ChatGPT – to be mass-adopted in consumer products and services like Windows and Office 365?
What is important to understand is that this is not a question directed at OpenAI’s leadership alone. If 90% of the people employed by this “non-profit” organization threatened to go do the exact same work for a private company, Microsoft, unless Altman – who openly belongs to the “boosters camp” – was reinstated as CEO, then what does that say about OpenAI as a whole? Is the remaining 10% enough to convince anyone that this is an organization with humanity’s best interests at heart when it comes to AI/AGI research and development?
Mentioning Windows and Office 365 also leads to another question: since Microsoft has invested so much in OpenAI – and was so eager to simply hire 90% of the company’s staff, which it would have done had the board of directors not stepped aside to bring Altman back – isn’t this already in conflict with the very nature of a non-profit that is supposed to share its findings with the world in such an important area of research? Is everyone else in the tech community, or the technology market in general, OK with a single corporation having such a close-relationship advantage over the rest? If anything, Microsoft’s involvement in OpenAI’s operation will be even deeper from now on, with Altman back at the helm triumphant and a Microsoft executive most probably having a seat on the new board of directors.
Speaking of the new board of directors: that is actually the most important reason why OpenAI is not the same company it was just 10 days ago. Helen Toner and Tasha McCauley, another OpenAI board member not supportive of the fast commercialization of AI, were the only ones replaced. Adam D’Angelo, also a member of the previous board, remains on the new one, but his main concern has always been with OpenAI’s investors’ interests: in other words, he stands with Altman in principle, if not in essence. Ilya Sutskever, the company’s chief scientist, who remains with the company, was thought to be neutral, having used either camp’s arguments at one point or another – but since he flipped sides in support of Altman and his return to OpenAI, one assumes that he’ll be taking Altman’s side on practically everything going forward. Altman’s and Brockman’s position on AI development is clear, and it’s safe to assume that whoever represents Microsoft on the new OpenAI board of directors will also belong to the same camp.
So the last obvious question is this: who will represent the camp questioning the “boosters camp” going forward – the one mockingly called the “doomers camp”, which prefers a more careful, deliberate approach to artificial intelligence research and possibly government regulation? Is OpenAI interested in having board members who offer opinions not aligned with its plans for commercializing artificial intelligence in the “move fast and break things” style of Silicon Valley? Is it open (ha) to the inclusion of voices on its board opposing those plans, in the spirit of a balanced approach? Even for appearances’ sake?
The coming weeks and months will hopefully offer answers to these questions, but let’s just say that – after the events of last week and the general outcome of the whole situation – OpenAI seems to have become a different kind of research organization. One for which the term “non-profit” does not seem right anymore. As for what this means for artificial intelligence and artificial general intelligence, it will be revealed in the fullness of time. For better or worse, one way… or another.