Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
Oh, the bounty of current events for making AI comparisons! We have films about grappling with the implications of unleashing technology with world-destroying power, and about inanimate human figurines coming to life to teach us about our own society. As if that weren't enough, senior U.S. government intelligence and military officials testified before Congress that "non-human biologics" have been recovered from what they purport to be extraterrestrial spacecraft. Is it just me, or are things getting weirder, faster? All of those are a bit too on-the-nose for me to offer an original take. Instead, let's start with a number that jumped out at me from a fantastic piece in The Atlantic by Ross Andersen: 99%.
The origins of OpenAI
99% is the proportion of OpenAI's staff working under its for-profit division, an entity called OpenAI LP. OpenAI is the creator of the chatbot ChatGPT, which rocketed to global prominence beginning in November 2022. Founded in 2015, OpenAI began as a nonprofit entity, OpenAI Inc., with the aim of creating artificial general intelligence expediently and ethically, so as to lead to the best outcomes for humanity. In particular, the founders chose the nonprofit structure because they wanted to insulate the organization's work from profit-driven motives, which they felt could pull the venture toward misaligned or harmful outcomes.
So, how did we go from 100% nonprofit in 2015 to 99% for-profit in 2023? The cynical explanation is probably the most intuitive: the principals saw the potential for unfathomable wealth creation and had a change of heart. However, I don't think this apparently obvious scenario is what really happened, and tracing the progress of this pivotal player in AI reveals interesting insights about the future of organizational structure and the critical role nonprofits must play in it.
When OpenAI began work as a nonprofit in 2015, it published an announcement that included the following statements:
Since our research is free from financial obligations, we can better focus on a positive human impact… Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.
Back then, the new entity didn't generate much of a stir outside of tech circles, and what attention it did draw owed mostly to the big names involved in its inception. AI had fallen slightly out of fashion, as bold predictions about how much of our lives would soon be automated had failed to materialize on time. At the time, the most well-known type of artificial intelligence system was the expert system, which software engineers create by programming intricate sets of rules and logic protocols.
Deep Blue, the chess-playing computer famous for defeating Garry Kasparov in 1997, is an example of an expert system: its programmers encoded all the rules of chess so the system could calculate the interactions of pieces on the board and choose the best moves.
Expert systems can become extremely effective at the tasks for which they are designed, like winning games of chess, but they are notoriously brittle when the objective or conditions deviate from what they have been programmed to encounter. Chess is an extremely complex game with a huge number of possible sequences and scenarios, yet it is simple compared to, say, driving a car, and it would be infeasible for engineers to program a response to every scenario one might encounter on the road.
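To make that brittleness concrete, here is a minimal sketch of the rule-based approach. It's a hypothetical toy I wrote for illustration; the `driving_action` function and its rules are invented, not drawn from any real system:

```python
# A toy rule-based "expert system" for a single driving decision.
# Hypothetical illustration: every situation must be anticipated
# and hand-coded by a programmer in advance.

def driving_action(signal: str, pedestrian_ahead: bool) -> str:
    """Return an action for the small set of situations the rules cover."""
    if pedestrian_ahead:
        return "brake"
    if signal == "red":
        return "stop"
    if signal == "yellow":
        return "slow down"
    if signal == "green":
        return "go"
    # Anything the rules don't anticipate falls through to here:
    # the brittleness described above.
    raise ValueError(f"no rule for signal={signal!r}")

print(driving_action("green", pedestrian_ahead=False))  # "go"
try:
    driving_action("flashing amber", pedestrian_ahead=False)
except ValueError as err:
    print(err)  # outside its rules, the system simply has no answer
```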
A different approach to AI
The creators of OpenAI were fascinated by a different type of AI system: deep learning systems. Instead of receiving rules from programmers, deep learning systems receive mountains of training data and reinforcement feedback. All of this information passes through neural networks, layers of simple computational units constructed to loosely imitate neuronal structures in our brains, from which emerges an incredibly intricate web of hundreds of billions of probabilistic relationships. This is the process that created GPT-3 and GPT-4, the large language models that power ChatGPT.
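For a feel for the difference, here is another minimal sketch (again an invented toy, not anyone's production code) of a tiny neural network that learns a simple function from examples rather than from hand-written rules. Real models like GPT-4 work on the same principle, at a scale of hundreds of billions of learned weights:

```python
import numpy as np

# Hypothetical toy: a tiny neural network learns the XOR function
# purely from example data and feedback -- no rules are programmed in.

rng = np.random.default_rng(0)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # inputs (last column is a constant bias)
y = np.array([[0], [1], [1], [0]], dtype=float)                          # desired outputs

W1 = rng.normal(size=(3, 8))  # the "web of relationships": learned, not programmed
W2 = rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                 # repeated exposure to data and feedback
    h = sigmoid(X @ W1)                 # hidden layer of simple units
    out = sigmoid(h @ W2)               # the network's current guesses
    d2 = (out - y) * out * (1 - out)    # how to nudge the output weights...
    d1 = (d2 @ W2.T) * h * (1 - h)      # ...and the hidden weights (backpropagation)
    W2 -= 1.0 * (h.T @ d2)
    W1 -= 1.0 * (X.T @ d1)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0], learned purely from examples
```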
What enables the amazing abilities of deep learning systems is the combination of a huge amount of data, a massive amount of computing power, and the skillful application of cutting-edge algorithmic techniques. OpenAI quickly realized that its nonprofit structure was impairing its ability to procure these key ingredients, all of which cost a lot of money. Creating internet-scale training datasets and assembling world-leading supercomputing resources is incredibly expensive, and convincing the top minds in AI to forgo eight-figure compensation packages at leading tech firms to join a small nonprofit startup is a tough ask.
Business on hard mode
In 2019, to address these concerns, the leaders of the company created OpenAI LP, a kind of hybrid organizational structure they call a ‘capped-profit’ entity. Here’s what the company said when they announced the formation:
We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI. We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.
We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.
The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity.
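To put rough numbers on that mechanic, here is a back-of-the-envelope sketch. The 100x figure matches the investor cap described later in this piece; the dollar amounts and the `split_proceeds` helper are hypothetical, invented for illustration:

```python
# Illustrative sketch of the "capped-profit" split quoted above.
# The figures are made up; OpenAI's actual terms vary by round.

CAP_MULTIPLE = 100  # investor returns capped at 100x the initial investment

def split_proceeds(investment: float, total_return: float) -> tuple[float, float]:
    """Split proceeds between the investor (up to the cap) and the nonprofit."""
    investor_share = min(total_return, investment * CAP_MULTIPLE)
    nonprofit_share = total_return - investor_share
    return investor_share, nonprofit_share

# A $10M investment that someday returns $5B: the investor keeps $1B
# (100x), and the remaining $4B belongs to the nonprofit.
print(split_proceeds(10e6, 5e9))  # (1000000000.0, 4000000000.0)
```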
A group of the world's leading capitalists tried #nonprofitlife for a while and realized what those of us in the sector have known all along: nonprofits are businesses running on 'hard mode'. The same brilliant people, working on the same important mission, who went on to raise tens of billions of dollars and build one of the most important technology companies in the world, couldn't overcome nonprofit fundraising headwinds. Did they try a fundraising gala? Next up, a generously donated timeshare weekend retreat in Orlando! Do I hear $10 billion?
If only most nonprofits had the option to flip the profit switch back on the way OpenAI did with its capped-profit entity. Here is how the company describes the governance of its for-profit arm:
OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.
A hybrid model
Investors in the for-profit entity agree to a cap on their return of 100x the initial investment. Additionally, the nonprofit board must maintain a composition in which a majority of members hold no financial stake in the company. Sam Altman, OpenAI's CEO, declined an equity stake in the company for this reason. A mentor of his, Paul Graham, founder of the world-famous startup accelerator Y Combinator, mused about a fuller explanation of Altman's motivation in a recent New York Times article:
“Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
Paul Graham
Today, OpenAI LP has raised huge sums of investment in the wake of its somewhat accidental, world-shaking release of ChatGPT last fall. The company is growing fast, hiring to keep pace with its astonishing rate of user adoption and the proliferation of GPT integrations across the global tech industry. As I mentioned earlier, 99% of the company's headcount now sits, unsurprisingly, in its for-profit arm. As for the 1% on the nonprofit side, they're focused on educational programs and policy initiatives, according to the organization's website.
Another of OpenAI's cofounders, Elon Musk, has frequently disparaged the company, pointing to these same facts to accuse its leaders of abandoning the mission and selling out. This isn't the first time I think Elon might be wrong. I don't believe the company's purpose-driven origins were a ruse, nor do I think the current lopsidedness of staffing means the nonprofit side has become a facade. Speaking from experience, starting a nonprofit is a hassle and a bad way to go about making a lot of money, and I have seen little evidence that the nonprofit arm brings the company any reputational benefit.
A parallel innovation
I see this evolution to a capped-profit model as OpenAI's ongoing attempt to build perhaps its most important safety measure: a firewall between its powerful AI technology and the siren call of wealth. In the same piece in The Atlantic, Andersen interviews OpenAI's Chief Scientist, Ilya Sutskever, about the difficulty of keeping AI in alignment with human values:
In San Francisco, I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
“I don’t want it to happen,” Sutskever said, but it could. Like his mentor, Geoffrey Hinton, albeit more quietly, Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness. It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
Alignment is a technical problem, and it is much more than that. OpenAI's exploration of hybrid profit models is every bit as important as the innovation taking place in its AI research labs. Capitalism, in its current form, is a dangerous environment in which to bring forth and unleash powerful autonomous systems, because it holds human flourishing to be a desirable side effect of wealth creation. The thinking goes, more or less: if everyone focuses on making as much money as possible, all of us will see our situations improve.
Even the most single-minded capitalist is not literally only thinking about making money every moment of the day. They are concerned with living the life such a pursuit allows for. Machines, on the other hand, will strictly and literally race toward the goals we give them, without pause. With autonomous systems as powerful as those forthcoming in AI labs around the world, which are rapidly integrating with global software infrastructure, our well-being cannot remain an afterthought. It must be explicit, and it must be sacrosanct.
Can we evolve so that wealth creation becomes the desirable side effect of human flourishing, instead of the other way around? Somewhere between all-profit and no-profit, there is ample room to look for something new.
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.
Bonus content!
I had a blast speaking with host Nate Birt on the Secrets of Social Impact Communicators podcast. Give it a listen here: