Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
As I observe people in my life using generative AI software for the first time, it seems our primal circuitry is signaling: get ready. Our threat-or-opportunity detection system is telling us to pay attention to this powerful new technology. For anyone wondering whether AI is a fad, or whether AI is the new crypto, I think the answer is a clear “No.”
“Crypto was money without utility,” [Thompson] argued, while tools such as ChatGPT are, “for now, utility without money.” Generative AI is “clearly something, even if one wants to argue that the thing it is is, for now, a toy,” he said.
For readers feeling anxious about the arrival of AI, welcome to the club! Our other members include Sam Altman, CEO of OpenAI (the maker of ChatGPT), who is concerned about the potential for “scary” AI on the horizon, and leading disinformation researchers who worry about AI’s ability to create harmful content at unprecedented scale. But are these the reasons ordinary folks feel unnerved by AI when seeing it for the first time?
On an individual level, people are more worried about AI taking their jobs or making them irrelevant. They’re concerned about the ethics of using AI-generated content and where to attach accountability or credit when no human is involved in an outcome. At the periphery of these emotions, perhaps there is a tinge of shame at being challenged or even outdone by a machine, especially in a craft we may have spent years mastering. For visual artists whose artwork has been used to train AI models that now imitate their designs, anger is an understandable feeling, as well.
What was the point of learning to write well? Why did I spend so much time learning graphic design? If I make a widget and a machine can make more of them, better, faster, and at a lower cost than I can — have I lost part of my value and my identity? AI has entered a game that has only ever had human players, and we’re understandably uneasy about the competition.
An aphorism from the ’90s goes, “Don’t hate the player, hate the game.” The game, in this case, is a socioeconomic system that defines our value as people by what we produce, how much, how fast, and at what cost. As long as we measure our worth largely by productivity, we will be at risk of being benched in favor of more capable machines. We need to figure out how to change the game so it remains “human-centered in an automated world,” to quote nonprofit tech experts Beth Kanter and Allison Fine.
Our AI-provoked anxiety is helpful insofar as it focuses our attention on the challenge at hand, but remaining anxious does not lead to the best outcomes. Emotion researchers have shown that, while anxiety and excitement evoke similar physiological reactions in our bodies, they elicit very different psychological experiences in our minds.
It turns out that anxiety and excitement are physiologically very similar… Both anxiety and excitement are high arousal emotions, characterized by an increased heart rate. While the physiological experience of these two emotions is similar, they have incredibly divergent effects on performance. As we have all experienced before, too much anxiety harms performance, but excitement enhances it.
Both anxiety and excitement prime our systems to pay attention and prepare for action, but anxiety causes us to restrict our thinking and overlook helpful options, while excitement tends to make us more optimistic, more attuned to positive possibilities, and more confident in our agency to influence outcomes. Science says we should try to reappraise our anxiety as excitement.
One possibility to be excited about is that the rise of AI may accelerate the evolution of capitalism into a more sustainable and equitable economic system for more people and our planet. “No single technology in modern memory has caused mass job loss among highly educated workers. Will generative AI really be an exception?” writes Annie Lowrey in The Atlantic. Put bluntly, unlike previous technological breakthroughs, AI might disrupt the livelihoods of wealthy people. These people tend to hold more influence over policymaking and may (finally) be motivated to act now that it is in their interest to do so.
In a widely read 2021 essay titled “Moore’s Law for Everything,” OpenAI CEO Sam Altman proposes a vision that anticipates the changes wrought by AI and recommends radical new policies that would tax capital instead of labor to fund universal basic income (UBI).
As AI produces most of the world’s basic goods and services, people will be freed up to spend more time with people they care about, care for people, appreciate art and nature, or work toward social good. We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens.
Caring for people, supporting arts and the environment, and working toward social good — sound like a sector we know? Is it irony or destiny that the nonprofit sector, for years dismissed as less important and less efficient compared to the for-profit sector, may be far and away more future-proof?
The difficulty of monetizing most nonprofit causes (which has driven many nonprofit executive directors to madness) may be what protects this kind of mission-driven work from automation. As AI rapidly floods the private sector, replacing human labor, nonprofit endeavors like caregiving, the arts, and environmental stewardship will become high ground where people go for refuge. In this metaphor, altitude becomes a measure of human-centered value to society.
A new kind of anxiety and excitement arises when I imagine a future in which, say, The Nature Conservancy, not Walmart, is one of America’s largest employers. Entertaining the idea that a world so good and so just is possible is terrifying, because there is such a high likelihood we will fail to achieve it.
The AI revolution is the fourth such tectonic technological shift, after the agricultural, industrial, and computational revolutions. It is underway and, like its predecessors, is unlikely to yield before it reshapes the world as we know it. To move from our current system, which precariously balances supply and demand on a fulcrum of scarcity, to one grounded in a philosophy of abundance, I believe nonprofits must play a leading role. Realizing the best version of a post-AI world seems to depend on all kinds of human-centered movements becoming unimaginably successful as well.
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.
Free Webinar: Unlocking the Potential of AI for Nonprofits: Overcoming Fear with Curiosity
As artificial intelligence use grows in society, what does it mean for nonprofits and philanthropy? How should we feel about it? How would we use it? Can it take on tasks like grant applications? What are the ethical and equity considerations? Will it take over our jobs?
Join us for this conversation with Beth Kanter and Allison Fine, authors of The Smart Nonprofit; Philip Deng, creator of Grantable (an AI-supported grantwriting platform); and Vu Le of NonprofitAF.com.
Automatic captions will be enabled
Mar 14, 2023 09:30 AM PT / 12:30 PM ET