As the dust continues to settle in the wake of OpenAI CEO Sam Altman's firing and rehiring, it seems nonprofits took the biggest hit. It's as if we caught an errant elbow to the eye in someone else's fight. 501(c)(3)s were casually dissed by much of the tech and media world, which pointed fingers of blame at the nonprofit board that oversees (at least for now) OpenAI. They seem to hold it responsible for rocking the $90 billion boat by questioning and relieving the skipper, Altman, who pulled a Jack Sparrow-esque maneuver, taking back the ship without firing a shot.
Let’s assume for the moment everyone learns the wrong lesson (translation: not the lesson I wrote about last week 😘) from the Altman fiasco and we stop trying to find ways to de-couple or firewall AI from profit-making.
Instead, we push the automation pedal to the metal, accelerating workforce disruption at an unprecedented rate, changing almost all jobs and eliminating many altogether. Anytime AI labor can approximate or surpass human levels of quality at a lower cost, for-profit companies will be highly incentivized to replace people with machines, and will find themselves powerless to resist as capital dominates. Let's imagine what this might look like.
When a profit motive is present, most if not all of an organization's decision-making can be reverse engineered to that aim. This binary financial focus (if more profit, do more; if less profit, do less) is simple, yet it enables astonishingly large, intricate, powerful, and valuable enterprises to emerge. Walmart has 2.1 million employees, more than the populations of 15 U.S. states, and every person's role is precisely directed at this single purpose. Apple has a market cap of $3.01T, greater than the national GDP of all but the six largest economies in the world, another monument of singular alignment.
Such organizations are magnificent engines of innovation, value, and wealth creation. But what has become increasingly clear is the ruthlessness of the simple profit heuristic at the heart of for-profit companies, and our relative powerlessness to guide their progress. The showdown between OpenAI's nonprofit board and its for-profit stakeholders encapsulates the dynamic well, as Ezra Klein recently noted:
[T]he nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere.
Some, like Google CEO Sundar Pichai, compare the transformative potential of AI on society to that of electricity, and while I think the comparison apt, the analogy breaks down when we consider the speed at which the current transformation will unfold. Electrification took place over approximately 70 years, from the mid-1880s to the 1950s, in most developed nations, and is still underway in some rural parts of the world. When OpenAI released ChatGPT just one year ago, in November 2022, it took less than two months to reach 100 million people. The timescale of the humanity-wide impacts of AI will not be measured in years; it will be measured in months, if not weeks.
Conventional wisdom says that when transformative new technologies arrive, they eliminate some jobs on the way to unlocking vast new economic opportunities. I've offered this logic myself, in a piece earlier this year drawing a comparison between AI and sewing machines, the latter of which obviated the role of the seamstress as it birthed the fashion industry.
While this pattern has played out consistently throughout history, it occurs to me now that AI, unlike previous inventions, has the potential to outpace human innovation itself.
I have no doubt people can come up with fantastic new ways to make a living alongside and with AI; I just wonder if we can do so fast enough, given the pace at which some predict advancements will come. Imagine a tech-savvy artist who learns to use the latest AI-powered graphic design tools to create captivating imagery for her clients, only to have a more powerful AI model debut that lets her clients create the same kinds of graphics without her. How quickly, and how often, are we capable of reinventing ourselves?
So what do nonprofits have to do with all this? I predict nonprofits will be particularly resilient against AI-caused human dislocation, like an oasis in a firestorm or high ground in a flood. Why is the Walmart shelf stocker's job more at risk than the food bank employee's? Profit and the lack thereof, respectively.
When a profit motive is not present, organizational decision-making does not reduce to a handy heuristic. Instead, decision-making is often messy, complex, and confounding, because people, not numbers, must make the choices. Furthermore, when the group decides to act, money doesn't flow naturally downhill as it does toward profitable companies; people must go out into the world to continuously make the case to yet more people, funders and philanthropists, who also get to decide whether or not to support the endeavor.
All this human dialogue is incredibly inefficient compared to the simple logic of for-profit models, and yet for all its drawbacks, having people at the heart of the enterprise creates space for values other than profit-making to live. This is why OpenAI's founders created it as a nonprofit: they recognized that AI must be guided by human-centric values in order to be a positive force in the world, a fact so many seem to have overlooked or forgotten.
This space for values is the firewall, the levee that protects nonprofit-sector work from the brutal math of the private sector, which AI will vastly accelerate. To reiterate: while I believe people are more than capable of adjusting to new technology to create new industries and forms of employment, I am concerned that the pace of change instigated by AI will be unprecedented and will test our adaptability to its limits.
If, as I suspect, large swaths of the workforce are unable or unmotivated to maintain a relentless pace of personal and professional reinvention, they may find refuge in the nonprofit sector, where there is ample work to be done, absent the incentive to replace people with AI.
I want us to begin imagining a world in which the private and nonprofit sectors are inverted, where most people work for purpose-driven organizations pursuing goals that are the product of human wisdom, morality, and agency. While nonprofits are still so commonly dismissed with a pat on the head as only being good for working on small, feel-good problems, this will soon change as AI reminds us of the value of our values.
Nonprofits already account for 1 in 10 jobs in the U.S. and roughly 6% of GDP, about the same size as the entire restaurant industry. We should be asking ourselves: what needs to happen for our sector to be able to accommodate 9 in 10 U.S. jobs? How can we welcome and deploy tens of millions of people to find dignity and purpose in a lifetime of fulfilling work and creative arts?
This may seem like a wild proposition when so many small nonprofits are worried about more immediate concerns, like replacing that major donor who left the fold, the unexpected influx of clients in dire need of services, or the reliable event space that just doubled its fee. I believe these struggles arise because the sector has been ahead of its time, its innovation under-appreciated and thus under-invested in. As AI transforms our economy, our human-centric, values-driven organizations will come to be seen as early prototypes of how sustainable enterprises that preserve and promote human dignity will be built in a post-AI world.
Thanks for reading this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work. Please subscribe for free, share and comment.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.