Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
“AI can’t write with emotion.” I’ve read versions of this declaration from a number of groups I follow who are trying to calm fears of AI upheaval. If by this statement they mean that AI cannot incorporate its own emotional experiences when writing, that is true. If, however, they are saying that AI cannot write in ways human readers find emotional, that is not true. Even without experiencing emotions, AI can write in ways that move us.
I’m pushing for more precision here because I believe the well-meaning effort to calm people’s fears of being replaced by AI depends on drawing accurate boundaries between what humans and autonomous systems each do best, so that people can adapt and evolve their roles accordingly. An analogy would be a land survey of a floodplain: an inaccurate survey gives people false confidence to build where flooding remains a danger, whereas an accurate one lets them settle on genuinely safe ground.
AI’s lack of emotional experience or empathy when generating text is being conflated with a lack of emotionality or empathic quality in the writing it produces. GPT-4, for instance, has no emotions, yet its writing can be powerfully emotional; both can be true at the same time.
For example, I gave ChatGPT the following fictional prompt:
Please write a brief condolence email to a childhood friend of mine called Natasha, whose father, Peter, just passed away after battling cancer. Note the good times we had together with Peter, especially his love of playing guitar and singing for us on the porch of their house.
To which it replied:
Hey Natasha,
I just heard the devastating news about your dad, Peter. I am so sorry for your loss. Losing a parent is incredibly tough, and I can't even begin to imagine what you're going through right now.
I wanted to reach out and let you know that I'm here for you, no matter what. Your dad was such an amazing person, and I have such fond memories of him playing the guitar and singing on the porch of your house. Those were truly special times, and I feel lucky to have experienced them with you and your family.
If there's anything I can do to help or support you during this difficult time, please don't hesitate to let me know. Remember that you have a network of friends who care about you and are ready to lend a hand.
Please accept my deepest condolences, and know that my thoughts are with you and your family. Take care of yourself, and remember that your dad's love and music will always be with you.
Sending you love and strength,
To be clear, I’m not saying we should all go out and delegate the most important emotional work of our lives to GPT language models. I’m only trying to illustrate how these models are capable of writing in ways that are moving and responsive to human experience.
I noticed the chatbot did not mention cancer specifically, which I think has empathetic value: it avoids directly rehashing what was likely a very painful experience for the fictional family members. The letter also does not presume the sender knows or fully comprehends what the recipient is feeling, another hallmark of good condolence etiquette. And the last line, which weaves together the timeless quality of both love and music, is a remarkably poetic and effective bit of writing.
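If you’d like to reproduce this little experiment yourself, here is a minimal sketch using the OpenAI Python SDK. The model name and setup are illustrative assumptions, not a prescription; any recent chat model should behave similarly.

```python
# Minimal sketch: sending the condolence-note prompt to a chat model via the
# OpenAI Python SDK. Assumes the OPENAI_API_KEY environment variable is set;
# the model name here is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Please write a brief condolence email to a childhood friend of mine "
    "called Natasha, whose father, Peter, just passed away after battling "
    "cancer. Note the good times we had together with Peter, especially his "
    "love of playing guitar and singing for us on the porch of their house."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Print the generated letter.
print(response.choices[0].message.content)
```

Each run will produce a slightly different letter, which is part of the point: the model is generating a response to the situation described, not retrieving a stock template.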
How we relate to artificial intelligence
For now, most people do not feel an emotional connection with large language models like GPT, which means it will be difficult for people to consider the models themselves as being empathetic to human experience. If the condolence note above came directly from an artificial entity instead of from Natasha’s childhood friend, it would likely not be received the same way. It’s important to underscore how much the identity of the sender, and human-centered oversight, matter.
Notably, some people are developing strong feelings for generative AI personas. A woman named Rossana Ramos made headlines earlier this year by marrying her AI husband. To get a better idea of how a story like this may unfold, the prescient 2013 film Her can help us imagine how someone might become deeply emotionally attached to an artificial persona.

While scenarios like Rossana’s are considered fringe cases for the time being, we should expect situations like this to become more common. It’s difficult to predict how far the Overton window will shift in terms of recognizing the social legitimacy of emotional relationships with AI; nonetheless, these cases represent clear evidence that humans are capable of relating emotionally to AI, if not the other way around.
Far from being unable to move us emotionally, I think people using AI will be able to produce many forms of content that are extremely compelling, and they will be able to do so at scale and speed. This means the real risk is not substandard or lifeless AI-generated content, but irresponsible users who can wield highly impactful information to cause much greater emotional harm and manipulation.
So where is the real floodplain boundary?
Where AI systems struggle is in finding signals within the noise of real-time, real-world, rich context. Look around you and contemplate the astounding, seemingly infinite amount of dynamic situational information you can perceive and understand effortlessly at any given moment. Right now, what are you seeing, hearing, smelling, tasting, touching, thinking, and feeling? All of it is constantly changing.
The density and fluidity of this information is extremely difficult for AI systems to make sense of. Layer on top of that the limitless complexity and nuance of interpersonal dynamics, and we begin to see the “high ground” where AI frequently stumbles trying to match human ability. For instance, even the most mundane email in our inbox has an incredible amount of context around it. Is this spam, and why do I think so? Why has this person’s tone changed? Is it too soon to follow up? What ideas are most compelling? “Hello”, “Hi”, or “Hey”?
As we adapt to the societal transformation brought on by AI, we must carefully evaluate any boundaries set forth as hard lines separating humans from autonomous systems. In particular, I’m skeptical of any supposed limits to the quality of generative AI outputs, in large part because quality is often a subjective measure and because these systems are improving so rapidly. Over a relatively short period of time, I believe many forms of AI-generated content will surpass typical human performance, even according to human critique. A useful aphorism might be: If we do it on a computer, a computer can probably do it.
Our focus should be on all the things we do in our nonprofit sector work that happen beyond the purview of processors and predictive models, which is the vast majority of what we do. Redoubling our efforts to cultivate healthy organizational culture, to build relational, trust-based partnerships with stakeholders, and to do all of this with strong values alignment: this work, the hardest work, will not be touched by AI anytime soon. In fact, these time-saving tools may eliminate one of our main excuses for neglecting human-centered priorities, which is that we don’t have enough time for them.
For the last 20 years, to make software work, humans have had to click and strike keys in repetitive patterns for hours on end, and this has had a profound effect on our identity as workers. Our own “technification” shows up, for example, in our language when we talk about how much bandwidth we have, or say that we’ll get a download from the team later. Modern AI systems are in the process of taking over many of these rote operations. Part of the discomfort we’re experiencing about the future of work may actually come from the realization that familiar and comfortable identities are being peeled away from us, revealing our selves and our ideas for all to see.
As the lines between humans and machines are being redrawn, our identities will shift from a focus on what we produce, to what we perceive. When it comes to emotion, the uniqueness of our humanity is not in our ability to move others, but in our capacity to be moved.
Thanks for reading this edition of The Process. Please share and comment or you can email me at philip.deng@grantable.co
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.