The AI Imperative: Nonprofits in the Digital Age
Predictions and advice for nonprofits navigating tech in 2024
Last September, I had the chance to visit the San Francisco headquarters of OpenAI as a member of the OpenAI Forum, an interdisciplinary group of professionals and community members assembled to help the company think broadly and deeply about the implications of bringing powerful forms of artificial intelligence into being. Wandering the room and feeling a heady mix of imposter syndrome and social anxiety, I worked up the nerve to start a conversation with an OpenAI team member and asked one of those vanilla conversation starters, “What do you all see on the horizon in the next five or so years?”
“Five?!” Apparently I had vastly exceeded the timeframe AI futurists feel comfortable speculating on. “More like one or two.”
One or two years until what? My friend did not elaborate before heading off to mingle with others, and I was left with the unsettling feeling that even here at the epicenter of the current generative AI revolution the endeavor was proceeding with a combination of terrifying speed and low forward visibility. It felt like riding in a fast car along a dark winding road with only a dim oil lamp to light the way ahead.
Others at OpenAI apparently felt similar concerns and tried to pull the emergency brake last November, when the company’s nonprofit board voted to oust CEO Sam Altman, only to find the handle they pulled was not really connected to the wheels. Read my breakdown of those events here 👇
Prediction 1: AI will accelerate
My word for AI in 2024 is acceleration. I believe things are going to get much, much faster. Despite all the hubbub around AI in 2023, it was a relatively slow year for the tech sector, which saw big companies carry out large-scale layoffs and startup investment stall under high interest rates.
As 2024 begins, economic indicators point to potential rate reductions, which should reignite startup investment, spur cutting-edge AI research breakthroughs, and push companies to bring new AI software products to market. But the biggest factor I expect will drive faster and more head-spinning AI progress is increased competition between juggernauts like Google, Microsoft and OpenAI.
All of these companies have the key ingredients to build powerful large language models — talent, data, computing power, and money — and are highly incentivized to do so. They are in competition with one another, and many have also cited the need for U.S. tech companies to establish and maintain dominance in the field of artificial intelligence over perceived rivals like China. Multidimensional and multilateral competition creates an all-out race forward, but to what end?
Prediction 2: AI will meet you where you are
Companies like Google and Microsoft, in particular, are extremely well-positioned to roll out AI features and instantly get them into the hands of billions of people by integrating them directly into staple software programs most of us use on a regular basis.
I expect by the end of 2024, both Gmail and Outlook will have generative AI features that can help draft email correspondence in a style that matches your own. There’s no reason to believe that Word and Google Docs won’t gain similar abilities. Microsoft recently announced it will add a new ‘Copilot’ key to its standard keyboard, the first change to the layout in nearly 30 years, which will call up a generative AI assistant the company intends to be a fundamental interface to its software products.
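To make that prediction concrete, here is a minimal sketch of how an email-drafting feature like this might work behind the scenes, using the OpenAI Python client. The model name, prompt wording, and sample emails are illustrative assumptions on my part, not a description of how Google or Microsoft actually build their products.

```python
# Hypothetical email-drafting helper: show the model a few of your past emails
# as style examples so the reply it drafts roughly matches your voice.
# Assumes the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_reply(past_emails: list[str], incoming_email: str) -> str:
    # Past emails serve as style examples; the incoming message is what we reply to.
    style_samples = "\n\n---\n\n".join(past_emails)
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice; a real product would pick its own model
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft email replies that match the user's writing style. "
                    "Here are examples of emails the user has written:\n\n" + style_samples
                ),
            },
            {
                "role": "user",
                "content": "Draft a reply to this email:\n\n" + incoming_email,
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    samples = ["Hi Jordan, thanks so much for the quick turnaround! Warmly, Philip"]
    print(draft_reply(samples, "Hi Philip, could we move our call to Thursday?"))
```

The design choice worth noticing is that the "personal style" comes entirely from the examples handed to the model at draft time; nothing here requires retraining, which is part of why these features can ship inside existing email clients so quickly.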
Irrespective of improvements in the performance of the underlying large language models like GPT-4, we will feel their impact more and more as their distribution improves.
Prediction 3: AI will be polarizing
Perhaps this isn’t much of a prediction at a time when it seems just about anything can serve as fodder for division, but I include it prophylactically so that we might all strive to preserve and protect nuance when things inevitably go a bit sideways.
A survey just released from the World Economic Forum holds misinformation and disinformation as the second most serious risk (after extreme weather) facing the world in 2024. The report reads, "The widespread use of misinformation and disinformation, and tools to disseminate it, may undermine the legitimacy of newly elected governments."
Unlike any technology before it, generative AI can simultaneously elevate the quality and sophistication of malignant content while dramatically reducing the cost and time required to produce it. For someone with bad intentions, the world in 2024 will be rich with potential targets to exploit, from geopolitical hotspots to the U.S. presidential election.
There will come a day when a calamity involving generative AI makes global headlines. Many will decide AI is the problem, but they will be pointing fingers at the wrong culprit. The real foe, the AI systems powering content algorithms such as social media feeds, is the most powerful technological force tearing our social fabric asunder and hindering our ability to solve big problems collectively.
Hyperrealistic misinformation is certainly a challenging issue to grapple with, and generative AI supercharges the problem, but fakes have always existed and people have learned to decipher them. Today, however, we are particularly vulnerable because of how divided we are. AI-powered deception is concerning, but AI-powered division is what worries me most.
What does it all mean for nonprofits?
We should all anticipate the continuation of what some are calling “AI vertigo”: the feeling of disorientation one experiences as both the incredible pace of AI development and the fractal implications of expanding AI capability leave us unsure of where we stand as human beings relative to these machines.
AI vertigo is not something to endure alone or in private; rather, we in the nonprofit sector should focus intently on stoking open conversations to share information and perspectives on this incredibly important topic. We can all quietly pretend we’re not confused and concerned (the emperor’s new clothes method of change management), or we can get together and talk about it. Let’s talk.
Nonprofits should expect that every piece of software they touch will soon have generative AI dimensions to confront and, hopefully, command to the great benefit of the missions they serve. Modern software, as a category of tools, is not static; it is a dynamic, continuously evolving machine that changes with the times. Even when software interfaces remain mostly unchanged, that can belie profound shifts in how they work under the hood.
And when things do go wrong, as they inevitably will, let us try our best to avoid the reflexive finger-pointing of “AI is the problem!” In doing so, we may simply be doing the bidding of Facebook’s AI algorithm, which is all about getting us to aim angry digits at each other. Instead, resist reductive rage and the urge to throw all your AI software out the window, and let’s work together to find human-centered solutions that help prevent additional tragedy.
Go slow to go fast
In my opinion, the nonprofit sector is moving far too slowly when it comes to engaging with generative AI. Across the tens of thousands of people I spoke with on the subject last year, I estimate only 10% had added generative AI to their workflows, 20% had played around with it a bit, and 70% had no experience whatsoever. This pace is disheartening given that the most astonishingly capable digital teammate ever conceived of by humanity has been available for 14 months, for free.
At the same time, I admire the seriousness and intentionality with which the nonprofit sector instinctively proceeds when considering something as impactful and powerful as AI. It is second nature for us to consider the potential risks and harms of a serious action before taking it. In contrast, private-sector companies, especially those with the most to gain financially from the advent of generative AI, are proceeding with reckless abandon.
How can we as nonprofits remain thoughtful and deliberate while learning about this transformative new technology with much more urgency and confidence? I don’t have the answers yet, but I see countless examples all around us demonstrating how establishing safety systems can enable speed. If we take the time to ensure that we’re buckled in with helmets on, I hope we nonprofits can step on the accelerator too.
Thanks for reading this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work. Please subscribe for free, share and comment.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.
Great piece. As an OG who has been in the nptech sector for 30 years, I can say that moving slowly is not new. There have always been a small number of early adopters, with the rest of the sector slow to embrace tech. AI is different because of its potential and power, and because of how fast it is developing. I would love to see more (low-risk) experimenting and innovative thinking and doing.
Fantastic article -- I'm going to be sharing it a lot.
I spent some time working in a small private high school started within a nonprofit social service agency. I found the gap between the educators and the nonprofit staff regarding attitudes to technology to be HUGE. They all were working crazy hours for low pay, but the teachers (of all ages) were all over tech, and most of the agency staff were ostriches.
I'm so glad to be connected with folks who are trying to help our nonprofit pals "get it" when it comes to the potential of AI. Thanks for your thought leadership!!