Don't Blame Nonprofits for Firing Sam Altman
My biggest worry in the aftermath of the OpenAI debacle
Two Fridays ago, my phone came alive with an unusual flurry of messages, all alerting me to breaking news: Sam Altman, CEO of OpenAI, had been fired. This was a seismic event in the world of technology, startling in both its suddenness and its severity. There had been no sign that the leader of the company behind ChatGPT would be summarily dispatched; quite the opposite, Mr. Altman had appeared on keynote stages with world and tech leaders just hours before the news broke.
Sam Altman is out
The situation within OpenAI was chaotic. The company cycled through interim CEOs on a nearly daily basis; talks to bring Altman back began and quickly collapsed; and finally, more than 700 of OpenAI’s roughly 770 employees signed an open letter criticizing the company’s board for mishandling the situation. A notable signatory was Ilya Sutskever, OpenAI’s chief scientist, whom many reported to be the swing vote on the board that enabled Altman’s dismissal. Sutskever followed his endorsement of the open letter with a post expressing deep regret for his involvement in the board’s action.
Outside the company, on the Sunday following Altman’s firing, Microsoft CEO Satya Nadella announced that Altman would be joining Microsoft as CEO of a new advanced AI division, and that all OpenAI employees who wished to follow their ousted leader would receive employment offers from the software giant as well. Nadella’s play was a masterful move by OpenAI’s largest investor to de facto acquire nearly the entire company for free. To me, the move also felt like a dangerous consolidation of talent, resources, and sheer power in the race to develop AI.
Finally, on Wednesday, a mere five days after it all began, the mad merry-go-round came full circle as OpenAI brought Altman back to resume the role of CEO. What precipitated all this drama is still unclear, though I expect we’ll gain a fuller picture in the weeks to come. Here is a key excerpt from the original statement issued by the OpenAI board announcing Altman’s dismissal:
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
The unusual structure of OpenAI
At this point, it’s helpful to understand that OpenAI is uniquely structured as a hybrid entity: a for-profit company wholly governed by a nonprofit. The cofounders of OpenAI, including Altman, started the organization as a nonprofit research organization in 2015 because they foresaw a dangerous and irreconcilable conflict between the standard profit-making corporate purpose and the safe, equitable deployment of all-powerful AI. They reasoned that a nonprofit structure would better enable them to pursue a safer path for AI. Here is OpenAI’s mission statement:
OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.
As their work gained momentum, Altman and the other cofounders ran into the same nonprofit funding bottlenecks that hold many in our sector back: funding arrived too slowly, and in insufficient quantities, to drive the resource-intensive work of developing cutting-edge AI systems. So the group devised a hybrid strategy, forming a capped-profit entity that could receive private investment at the speed and scale necessary to fuel the work, while remaining controlled entirely by the original nonprofit. In this structure, the capped-profit company and its investors ultimately answer to the nonprofit’s board.
With the necessary funding in place, and new equity stakeholders in the mix including Microsoft and top-tier venture capital firms, the work proceeded successfully, albeit quietly, for several more years, until November 2022, when OpenAI released ChatGPT as a research experiment. In the weeks following the release, ChatGPT became the fastest-growing app in history, and today it has more than 100 million weekly active users.
While the runaway success of ChatGPT was unexpected, OpenAI under Altman’s leadership capitalized deftly on the rush of attention: quickly adding capacity to absorb new user signups, hiring some of the best AI talent in the world, securing an additional $10 billion investment and partnership with Microsoft, and accelerating the launch of new AI models and products, including GPT-4. Altman was recently on stage at OpenAI’s inaugural DevDay, a showcase hosted by the company to unveil its latest and most exciting products, anticipated with an excitement not seen since Steve Jobs hosted Apple’s Worldwide Developers Conference.
The nonprofit scapegoat
OpenAI has reached heights only a handful of elite technology companies ever attain, and its ascent to the mountaintop was orders of magnitude faster than that of any of its peers. Nothing is celebrated more in Silicon Valley than this kind of rocket-like trajectory. But the founders of OpenAI had been in and around this kind of thrill before; they had achieved glory, and, most importantly, they understood how wealth and power are wielded when technology companies become wildly successful. With this knowledge, they intentionally chose not to pursue that path, opting instead for a nonprofit structure to insulate the all-important work of deploying AI beneficially from the corrupting allure of unfathomably large sums of money.
From the perspective of the private sector, the behavior of OpenAI’s board was incomprehensible. In an emergency episode of the Pivot podcast recorded just hours after the announcement of Altman’s firing, co-host Scott Galloway, a professor at NYU, perfectly encapsulated capitalist dismay as he grasped for words to describe what had unfolded:
I just would have been shocked they couldn’t figure it out given the amount of money that’s being created. That they couldn’t have figured out a way to say, OK, $10 billion to build, you know, parks and homes for vet[erans]… I would have just thought they would have tried to figure this out.
I’m a fan of Professor Galloway, but in this episode I winced at his patronizing and dismissive portrayal of what nonprofits do. Even more importantly, I was dispirited by his total ignorance of why OpenAI was structured as a 501(c)(3) in the first place. Galloway seems to view OpenAI’s hybrid model as something akin to Toms Shoes’ famous promise to donate a pair of shoes to a person in need for every pair sold, a feel-good model that has since been copied by numerous other consumer brands. He doesn’t seem to grasp that OpenAI was formed as a nonprofit because the founders believed profit-making was incompatible with human-centric AI.
Professor Galloway’s cohost, veteran tech journalist Kara Swisher, had the correct intuition that the shakeup emanated from the fault line between OpenAI’s nonprofit and for-profit sides. But even Swisher’s finely honed journalistic viewpoint is tainted with condescension toward the nonprofit sector, as when she spoke about the board this way:
They’re very, like, much more oriented towards nonprofit ideas of what ChatGPT should be versus a commercial operation. So, interesting, a lot of the board, in fact, when you look at it, is of that, in that direction. I don’t know if it’s effective altruism, but it’s definitely much more, it’s a crunchier group of people, let’s say.
I’m surprised by Swisher’s surprise that an organization founded as a nonprofit precisely to prevent commercial interests from dominating humanity’s interests in AI would have a board that is 'more oriented towards nonprofit ideas of what ChatGPT should be versus a commercial operation.' The nonprofit was not a side project meant to burnish the corporate brand, as Galloway implies, nor was it an afterthought to bring some ‘crunchier’ people along for the ride who ended up touching something they shouldn’t have, as Swisher seems to hint. It was invested with real power for an important reason, and we may be witnessing a profoundly troubling system failure.
My biggest worry in the aftermath of the OpenAI debacle
A few months ago, I published a piece on these topics called The 1% of OpenAI, writing:
I see this evolution to a capped-profit model as OpenAI’s on-going attempt to build perhaps its most important safety measure — a firewall between its powerful AI technology and the siren call of wealth…
OpenAI’s exploration of hybrid profit models is every bit as important as the innovation taking place in their AI research labs. Capitalism, in its current form, is a dangerous environment in which to bring forth and unleash powerful autonomous systems because it holds human flourishing to be a desirable side effect of wealth creation.
What worries me most in the aftermath of this debacle is that the equally, if not more, important innovative work OpenAI was doing, creating a new kind of organizational vessel capable of conveying the world safely into the AI age, has suffered a massive setback. From appearances, the nonprofit board exercised its power clumsily and to ill effect, but I also believe the members who voted to oust Altman did so not out of greed or animosity, as is usually the case in such upheavals, but because they were trying to be good stewards of the mission. Helen Toner, a now former OpenAI board member who voted to remove Altman, was confronted with the possibility that such a move could destroy the company, to which she replied, “That would actually be consistent with the mission.”
As part of the deal to bring Altman back, the board was reconstituted to remove the members who voted to oust him. A board that could once at least boast some gender and sector diversity now lacks both, as the two women (one of whom worked in the nonprofit sector) have been replaced by Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former U.S. Treasury Secretary. These, I’m sure, are comforting changes for those who hold equity positions in the company, but for the rest of us, I believe this is a net loss.
A few weeks ago, in another piece called Tech is from Mars and Values from Venus, I highlighted the opportunity for the predominantly female nonprofit sector to shape AI deployment in ways that benefit humanity, proposing:
The time has come for the mostly-female nonprofit sector to unveil the latest in human-centric and purpose-driven operating models for the mostly-male tech sector to learn from…
As I’ve written in previous pieces, the age of artificial intelligence will be defined by this existential challenge — to imbue the all-powerful machines of tomorrow, with humanity’s greatest wisdom and highest values, today. The innovators of silicon and algorithms must find a way to work with the innovators of justice and sustainability. AI, unlike any technology before it, demands the best of us, from all of us.
The restructuring of the OpenAI board to remove both women, including the only member from the nonprofit sector, while retaining only male technologists (even ones who also voted for the leadership change), is the exact opposite of what progress toward safe and beneficial AI ought to look like. It is difficult to imagine a group of board members with less in common with the average human being on this planet than the new board of OpenAI, and for a company supposedly committed to bringing beneficial AI to humanity, that matters.
Pushing back on sectorism
We also need to call out sectorism, a term I’ll coin here to describe the dismissive and derogatory way the nonprofit sector is viewed by people in the private sector, especially in elite circles. They do not understand that our purpose-driven sector is an economic laboratory, a place of incredible innovation seeking ways to power vast organizations that provide goods and services to large swaths of society, animated by motives other than profit-seeking: generosity, and the human consensus that certain work simply needs to get done even when the market won’t take care of it.
It was the centrality of human decision-making in nonprofit endeavors that inspired OpenAI’s founders to form as a nonprofit, so that people would remain in control of the powerful machines they set out to create. I hope very much that Mr. Altman has not lost sight of this crucial insight, and that he draws on his decades of experience as an entrepreneur and leader of entrepreneurs to view what has happened as the spectacular failure of an early prototype, the only remedy for which is to learn and try again.
Thanks for reading this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work. Please subscribe for free, share and comment.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.