Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
If ever there was a "gentleman athlete," Bob Erdman would be the man.
Coach Bob was my first basketball coach and is one of the most influential teachers of my life. He taught me many fundamental principles that served me both on and off the hardwood.
His players learned to be sharpshooters by following B.E.E.F., which stands for Balance, Elbow, Eyes, and Follow-through — a silent checklist I’ve used thousands of times in pursuit of that eternally satisfying swoosh. Nothing but net.
When we trained in the gym, Coach would call out each phase of B.E.E.F. and all of us would move through them in unison. On Eyes, he would tell us to imagine a dime balanced on the front of the rim; our job was to ever-so-gently brush it off with the ball on its way through the hoop.
As he put it, “Aim small, miss small.” His point was that intentionality can mitigate human error. When you focus your shot on a tiny target, even if you miss the imaginary dime, the shot may still go in because your aim was so precise.
This wisdom worked for my shooting percentage, and it applies now to how the philanthropic sector should approach AI. We need a proactive vision of what the ideal future looks like, with a high degree of detail and specificity. We need our own dime to aim at.
The less we define what a positive AI future looks like, the more room we leave for unintended harm. Given the awesome power of large language models like GPT, and their potential for both good and evil, we must aim for outcomes that are as detailed as possible, so that when we do miss the mark on occasion, the consequences are manageable.
For example, it’s likely AI tools will save us a lot of time in certain situations. An imprecise future vision might be, “We want to use AI to save staff time and resources.” Clank. Lots of harmful unintended consequences are likely if this is the extent of planning for AI.
Instead, a “dime” to aim at sounds like, “In keeping with our value of individual and community wellness, we will use AI tools to save staff time and resources, implementing a phased shift to a 4-day work week while continuing to expand and deepen the efficacy of our work.” Swoosh. With a clear understanding of why they are integrating AI into the workplace, this team is likely to emerge with improved outcomes and morale.
This post was partly inspired by a conversation I had last week with Meredith Noble, Co-founder and CEO of LearnGrantWriting.org, who captures this mindset by asking, “What could go right?” The fact that this question sounds a bit odd to the ear speaks to our inherent negativity bias. Meredith recognizes that most of us are far more accustomed to asking, “What could go wrong?”, and that we need to prompt ourselves to think differently than habit dictates.
So, to help us retrain our brains, here’s my reinterpretation of B.E.E.F. as a guide to achieving precise and positive AI outcomes:
Balance. Act from a strong emotional foundation. Healthy doses of both caution and curiosity set us up to see clearly and move with conviction.
Elbow. This step is about values alignment: AI usage should always be aligned with human-centered outcomes.
Eyes. Imagine down to the details. How will AI show up in the office? In the inbox? In the budget?
Follow-through. Commit to a balanced, human-centered, and specific strategy each time you engage with AI. Set yourself up for the next go by learning from the outcome of the previous one.
If you want something a bit more, eh, beefy, I suggest checking out The Smart Nonprofit, a book by nonprofit tech experts Beth Kanter and Allison Fine, which lays all this out with much more data and detail.
One of the main reasons to reimagine B.E.E.F. in this way is that, like shooting a basketball, learning to use AI skillfully to reach the most positive outcomes for society will require a lot of repetition. It’s not one huge project with a due date. Facility with AI concepts and tools will come from the thousands of micro-lessons each of us will experience in the months and years to come. From practicing prompts with ChatGPT to evaluating the ethics of different AI companies, every interaction with this extremely important technology is an opportunity to reduce fear and build mastery over it.
As I’ve written in previous posts, I’m pushing hard for the philanthropic sector to engage with AI technology because of how challenging it will be for the world to align it with human-centered outcomes. This is because the individuals and companies primarily driving AI’s development are in the private sector, where generating profit is often in tension with human-centered priorities.
Another gem from Coach Bob is, “Practice doesn’t make perfect, it makes permanent.” If you practice shooting free throws the wrong way a thousand times, all you’ve done is cement poor technique. We are at the very beginning of our journey with AI, which is the time to help the world build the right habits and expectations for the way we use it.
Our human-centered sector must work to develop and share its understanding of the technology, how to use it to advance righteous causes, and how to establish norms, policies, and case studies for the effective use of AI tools.
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access the grant funding they deserve.