What I've Learned From Speaking in Dozens of Nonprofit AI Webinars
How Nonprofits Are Poised to Take on the Challenge of Technology Leadership
Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
I wish we could get the entire nonprofit sector to attend a one-hour webinar about the implications of AI for our sector. We’d need a Zoom room capable of accommodating the approximately 12 million people working in purpose-driven roles across the country, and probably a week to get everyone muted or unmuted correctly, but it would be worth it to have us all on the same page about what will likely be the fastest and potentially most transformative technological revolution in human history.
For now, I’m taking on the task in small bites with two or three regular-sized webinars each week, during which I have the opportunity to engage with nonprofit folks in conversations about artificial intelligence. Our hosts, who represent all kinds of sector collectives and roles, usually ask if I have a standard presentation or agenda, to which I always reply that my preference is to introduce myself briefly, just well enough that people understand where my experience comes from, and then move as quickly as possible to audience Q&A.
Nonprofit professionals are paying attention to AI
People are extremely interested in this subject and I have yet to encounter a session that has run out of good questions and things to talk about. Over the course of meeting thousands of people in these kinds of virtual events, a picture has emerged in my mind, a composite of how the nonprofit sector is processing the rise of artificial intelligence, and this is what it looks like.
Despite AI being on everyone’s mind, a minority of nonprofit sector professionals have actually taken generative AI software for a spin. Several of the sessions I’ve been part of have polled attendees, and the average proportion of folks who report having tried, or actively using, AI tools in their work usually falls somewhere between 20% and 40%. Of these people, nearly everyone is referring to ChatGPT. At first, I was surprised by the relatively low percentage of AI users, especially among self-selecting groups interested enough to join a webinar about AI. Over time the number has begun to make more sense to me.
A good way to understand current nonprofit sector attitudes about AI is to reflect on the first hour of a middle school dance. Do they still have those? As the lights go down and the music starts, there are a few people, you know who they are, who are ready to get down from the get-go, and these people represent AI early adopters. They’ve got a ChatGPT Plus account, they’re using it across their work, and they’re keen for AI integrations to augment the other tools in their toolkits. They have no hesitation about publishing the content they’re creating with AI help, and are ready for more.
Next, the largest group are the kids who want to dance and eventually will, but need a bit more time to feel things out. This also feels like an accurate description of how most nonprofit professionals are approaching AI. They’ve tried ChatGPT, or watched someone else demo it for them, and while the capabilities of the chatbot are obvious, its implications are less clear and perhaps a bit unsettling. Can I trust AI? Is this stuff ethical? Is it cheating to use AI? Will AI take my job?
Lastly, there is a group that just isn’t into dancing, which doesn’t mean they’re not having a good time; it only means shaking tail feathers isn’t for them. Likewise, in our sector, there are those who for one reason or another have decided to pass on AI. It’s important to note that no one needs to defend this choice. At the same time, as at a school dance, these are formative times we’re in the midst of, and being out of step will have real implications for understanding and adapting to changes on the horizon.
We’re on the same team
Whoever you are at the dance, we all need to remember that everyone deserves to be safe and happy. I’ve advised AI enthusiasts, the dance floor dynamos, on specific ways to slow down and improve the safety of their practices. For example, I frequently caution against using any current generative AI tools, including ChatGPT Plus, as research tools, because of how easily fictitious data can slip past us.
On the other hand, I’ve also called in skeptics who I believed were being too dismissive of the capabilities of generative AI, to make sure they understood the implications for their work regardless of whether they personally choose to use it. “AI can’t write with emotion” is a declaration I push back on because it misleads people into underestimating the quality of writing that leading large language models are capable of producing.
Where we tend to spend the most time in these webinars is on clarifying misconceptions and drawing boundaries between how traditional software and generative AI work. While almost everyone is now aware of AI hallucinations, the tendency of generative AI systems to produce nonsensical or incorrect information in the course of normal operation, almost no one understands how or why this happens.
A new mental model for the new risk profile of AI
To understand generative AI, many people seem to be using mental models of how traditional software functions, which is leading to misunderstanding. For instance, people often share with me the unsettling experience of prompting ChatGPT and receiving outputs that contain a mixture of real and false information, and they often draw the conclusion that the software is performing this way because of poor quality or design. The reason for the poor performance seems obvious to them, especially those who have heard that large language models are trained on huge swaths of internet content, and they assume the chatbot must be something like a shoddy search engine that speaks well but stinks at selecting the right sources or verifying their accuracy.
At this point in the conversation, many are fascinated to learn that the AI they’re interacting with is nothing like a search engine, that it isn’t referencing data of any kind, and that it’s a closer cousin to autocomplete than to Google. A lightbulb moment often occurs when I share the following scenario:
If you ask ChatGPT, “What is 4 + 4?”
it will probably give you the correct answer, 8, but this is not because it has performed any arithmetic. It gives you the correct answer because the sequence 4 + 4 = 8 has likely appeared in various formats millions of times across its training data. The algorithm arrives at the correct answer because the correct answer follows the question so consistently in the data that an overwhelming statistical correlation is present.
If you ask ChatGPT, “What is 342,646 × 134,868?”
it will probably give you the wrong answer because this specific sequence of text appears rarely, if ever, in the training data. What is present, however, are many instances of six-digit numbers multiplied by six-digit numbers, followed by an eleven- or twelve-digit answer. The model will proceed to generate a number that looks plausible but is highly unlikely to be the actual correct answer.
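To make the autocomplete analogy concrete, here is a toy sketch in Python. It is purely illustrative and not how ChatGPT works under the hood (real models predict one token at a time with a neural network rather than counting literal strings), but it shows why a pattern-matcher nails “4 + 4” and has nothing to lean on for a rare six-digit multiplication:

```python
from collections import Counter

# Toy "training data": 4 + 4 = 8 appears constantly in internet text,
# while this specific six-digit multiplication virtually never does.
training_text = ["4 + 4 = 8"] * 1_000_000 + ["4 + 4 = 9"] * 3

def autocomplete(prompt: str) -> str:
    """Answer by returning the completion that most often followed the
    prompt in the training data. No arithmetic is ever performed."""
    completions = Counter(
        line.split("=")[1].strip()
        for line in training_text
        if line.startswith(prompt)
    )
    if not completions:
        # A real model wouldn't stop here; it would generate a
        # plausible-looking eleven- or twelve-digit number anyway.
        return "(no strong pattern in the data)"
    return completions.most_common(1)[0][0]

print(autocomplete("4 + 4"))              # "8", from correlation, not math
print(autocomplete("342,646 x 134,868"))  # nothing to pattern-match
print(342_646 * 134_868)                  # actual arithmetic: 46211980728
```

The point of the sketch is that autocomplete() never computes anything; it only surfaces the statistically dominant continuation, which makes it reliable exactly where the training data is dense and unreliable where it is sparse.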
This example tends to help audiences create an entirely new mental model for how generative AI software, like ChatGPT, actually works, which is fundamental for using these tools safely. With the new mental model and understanding of how generative AI differs from traditional software, people come to one of the most important realizations of our webinars — the risk profile of generative AI is very different from what came before.
Just as we don’t need to worry that a car will suddenly decide to throw us from the driver’s seat the way a horse might, riders in the past never had to worry about the safety implications of a horse going 100 mph. Likewise, understanding the dangers of generative AI and how to mitigate its risks requires grasping how this technology is fundamentally distinct from its predecessors.
Let’s look at data privacy, a concern I’m asked about frequently. The question usually goes something like, “If I ask ChatGPT to help me write a grant proposal, will it turn around and use my writing to help someone else write their proposal with my words?” Knowing how large language models work, spinning up outputs word by word based on probability rather than referencing specific source material, we can see why it’s very unlikely our exact words will find their way onto someone else’s page.
Instead of data privacy, data ownership may become a far more significant topic of legal and cultural debate as large language models learn to approximate our particular styles of writing and thinking.
Speed is a new risk vector created by generative AI systems, as in my horse-and-car analogy. I never had to worry about copyediting 1,000 pages of my own writing in a day, simply because I can’t write that quickly. Moreover, as the author of every single word, I am de facto editing as I go when I write by hand.
With generative AI assistance, it’s conceivable I could produce 1,000 pages in a day, and a new risk arises because I’m not necessarily writing every word myself, and because there is so much more text to monitor for mistakes. In this way, the speed of AI tools creates an entirely new landscape of risk and reward that we must carefully study.
Let the games begin
At this point in the webinar, most people’s heads are beginning to spin, including my own. With a few minutes left, our hosts usually guide the conversation toward action items, next steps attendees can take to learn more about AI if they’re interested. To me, the first and most important step has become clear: assume a playful mindset.
It is well known that play is not only something we do for enjoyment but also a powerful mode we switch into that allows us to learn to do challenging things. Some of the activities people do for fun, like extreme sports and high-stakes games, involve a lot of risk, and yet play allows us to build facility and agility. As we improve, we not only become better at avoiding negative outcomes but also more consistent at reaching desired ones. Play reframes failure from an intimidating final verdict into a meaningful yet temporary state. Playing is often a very social activity during which we build trust and relationships, as well as refine communication and intuition.
The good news is that playing with generative AI can be really fun. Take precautions to only use software from reputable companies, and then have at it. ChatGPT is a great brainstorming buddy, Tome can help with presentations, and Notion offers a wonderful AI-assisted writing experience, to name a few.
Ethan Mollick, a professor at the Wharton School and fellow Substack author of One Useful Thing, is a leading expert on incorporating AI into education, and he challenges his MBA students to complete an ‘impossible’ assignment using AI. Students must set a goal of creating something they lack the skills to make on their own, and then figure out how to do it anyway using generative AI resources. For example, students with no coding experience can choose to build a real, functioning app using tools like ChatGPT to help them write the code. I can’t think of a better or faster way to learn such useful skills.
After everyone has signed off, I almost always feel the hour we spent together passed quickly. We end despite having more to say, looking forward to a sequel and knowing there will be more to talk about. These conversations feel different. The technology is different, the interest in it is different, and so is what’s at stake. Nonprofit folks are very used to taking on ‘impossible’ tasks, and it feels like our sector is starting to reach for a really important one: becoming technology leaders in the age of AI.
2023 Fundraising.AI Virtual Global Summit
Please check out the upcoming 2023 Fundraising.AI Virtual Global Summit taking place on October 23 and 24, 2023. Registration is free, and I hope many readers will participate.
The Summit will virtually bring together more than 30 speakers to facilitate numerous tracks of thoughtful discussion, and to help all of us build this community into a force for good. Please join us!
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.