Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
Last week I attended a major conference for grant professionals and came away with a surprising conclusion: the sector needs to focus less on the ethics of AI in grant-seeking. In conversations with thousands of grant professionals over many months, I have yet to encounter a specific scenario in which generative AI creates a novel ethical threat that existing codes of conduct do not already cover. It is and always will be unethical to lie or mislead when seeking grant funding, and AI does not change this. We are still responsible for our words and actions.
To be clear, ethical conduct is foundational to a healthy grants sector, which distributes more than $800B annually based largely on mutual trust between grantors and grantees. It is precisely because ethical behavior is essential to this special kind of financial transaction that such robust codes of ethics, like the Grant Professionals Association Code of Ethics, exist within the profession.
The law of the instrument, sometimes called the law of the hammer and attributed to Abraham Maslow, goes: "If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail." I believe the grants sector is wielding its ethical codes a bit like the hammer in the analogy, which is why the profound implications of generative AI are primarily being viewed as potential ethical quandaries.
Anyone who plays with ChatGPT for a few minutes comes away with a sense of how powerful this technology is, so it’s natural to reach for safety systems to cope with discomfort and uncertainty. Let’s take a closer look at some of the most frequently cited ethical concerns I’ve encountered.
If I use AI in grant-seeking, is my data being shared with other people?
Versions of this question come up all the time, and the answer is: it is extremely unlikely your data will be shared without your knowledge. When it comes to standard security concerns, reputable AI systems are no less secure than the common workplace software we’re all using. For instance, ChatGPT is hosted largely on Microsoft servers, like the ones hosting Word, Excel, and Outlook, which are highly secure and do not co-mingle user data.
As an aside on how large language models work: even if your grant proposals ended up in a training data set, generative AI systems do not produce outputs by citing or retrieving information from their training data. Instead, they make mathematical predictions about each next word that should follow your prompt, based on the patterns they have observed across all the text in the training data. The more specific and unique a piece of writing is, the less likely an AI model is to recreate it for someone else.
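For the technically curious, here is a minimal sketch of that prediction process using the small, open-source GPT-2 model via the Hugging Face transformers library. The prompt is purely illustrative, and systems like ChatGPT are vastly larger, but the mechanism is the same: the model produces a probability distribution over possible next words, not a lookup into stored documents.

```python
# A minimal sketch of next-word prediction, assuming the open-source GPT-2
# model and the Hugging Face "transformers" library. The prompt is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Our organization requests funding to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model scores every token in its vocabulary as a candidate
    # continuation; we keep only the scores for the very next position.
    logits = model(**inputs).logits[0, -1]

# Convert scores to probabilities and show the five most likely next words.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  p={p:.3f}")
```

Nothing in this loop retrieves a document; the output is a statistical guess shaped by patterns across all the training text at once.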
So, what does information security mean in the age of AI? It means largely what it did before ChatGPT took the world by storm: we should use strong passwords or passkeys on all of our accounts across the internet, we should know the companies that make our technology, and we should understand their data use policies.
For example, if you use the free version of ChatGPT, you should understand that you’re engaging with OpenAI, a highly reputable AI research firm closely aligned with Microsoft, and that your interactions can be used to improve the user experience and potentially in future model training. You can opt out of this. Paid versions of ChatGPT make data sharing an opt-in agreement instead of opt-out, and for companies like my own, which interface with GPT via API, the policy is not to use the data for product improvement or model training.
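As an illustration of that last arrangement, here is a minimal sketch of what "interfacing with GPT via API" looks like, using OpenAI’s official Python client; the model name and prompts are placeholders I’ve chosen, not a recommendation. Per OpenAI’s stated API data-usage policy, data sent this way is not used for model training by default.

```python
# A minimal sketch of calling GPT through OpenAI's official Python client.
# Assumes the "openai" package (v1+) is installed and the OPENAI_API_KEY
# environment variable is set; the model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
    messages=[
        {"role": "system", "content": "You help draft grant proposal language."},
        {"role": "user", "content": "Draft one sentence describing a food bank's community need."},
    ],
)

print(response.choices[0].message.content)
```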
Is using AI a form of plagiarism or cheating?
Let’s look at another question I hear often: is using AI a form of plagiarism or cheating? The answer is no. Utilizing assistance, whether in the form of spellcheck, Grammarly, or hiring a professional grant writer, has never been considered cheating or untoward in grant-seeking, and using generative AI is no different. Instances of cheating and plagiarism in grant-seeking always involve some form of willful deception and/or negligence, and people can behave badly with or without AI. With AI, problems are most likely to arise from negligence, when folks move too fast and quality control suffers. Integrity and careful proofreading are the human-centered solutions that have long been written into the sector’s codes of ethics.
Another digression: where there is both concern and active litigation regarding intellectual property and large language models is with authors who are suing the creators of dominant models because their work, which is widely available online, has been scraped and included in training data. What is being litigated is not the use of publicly available content online; it is that this content has allowed the models to let other people mimic these authors’ particular writing styles. I strongly caution grant professionals against prompting generative AI to write grant proposal content in the style of any particular person other than yourself and/or the applicant organization.
Should I tell funders I’m using AI and what will they think about using AI assistance on grant applications?
The final concern I hear often is: should I tell funders I’m using AI, and what will they think about AI assistance on grant applications? My friend Vu Le, author of NonprofitAF.com, beat me to the punch here. Vu writes:
Whether funders like it or not, AI is here, and it will be used to write grant proposals. It will save many organizations a lot of frustration and grief that often come with traditional grantmaking practices that have been inflicted upon the sector over decades. Funders can use their energy to resist this, or use this as an opportunity to reassess inefficient and inequitable granting processes and work with nonprofits on a more meaningful level to tackle systemic issues plaguing our communities.
Vu Le - NonprofitAF.com
In the early days of the pandemic, grantors across the country slashed the length of their applications, removing extraneous questions and even going so far as to accept *gasp* grant proposals written for other funders as valid applications for emergency funding. Grant proposals still take far too long to complete, and funders should be invested in ways to alleviate the burden on applicants. I am certain some grantors will take an anti-AI stance, but I predict it will be extremely difficult for them to justify such a position.
I would also caution any funders reading this against using any form of AI detection software, because it simply does not work. The most reliable outcome these “detectors” achieve is not correctly identifying AI-written content; rather, it is creating embarrassing news stories about the calamities they cause by consistently misdiagnosing human writing as artificial.
There is one ethical question that is not coming up enough.
There is one ethical question that is not coming up enough in the grants sector’s discussions: how do we justify primarily serving the largest, most well-resourced organizations that can afford our services, which deepens grant funding inequity overall? The answer is: we can’t. One percent of grant applicants capture 50% of grant funding each year, and 90% of applicants do not work with external grant experts.
At the conference, only a few of us were people of color, and yet in discussion after discussion I noticed Black and brown folks in particular raising their hands to say how excited they are that generative AI can help level the grant-seeking playing field. They understand how AI can help code-switch writing into foundation-speak so certain applications are no longer penalized for differing styles of prose. Staff at the most persistently under-granted organizations see how, by using AI to write a proposal more efficiently, they might find enough time to eke out an application for funding that could fundamentally change the trajectory of their organization for the better, instead of hoping they can get to it next cycle.
The way I see it, grant writing has held all of us back, including grant professionals. When I was a grants consultant, I constantly felt torn between seeking large anchor clients who could pay my full rate so I could cover my bills, and serving more small organizations who couldn’t afford my services. If I were still consulting today, I’d be thinking about how to use AI to lower my costs and expand my client capacity, to both make more money and serve a wider, more equitable swath of my community.
I would create affordable service packages that provide clients with grant readiness appraisals, processes to build strong foundational proposal content, training on how to safely use AI tools to leverage their content and write proposals themselves, techniques to prospect and identify strong opportunities, editing and proofreading support, guidance on storytelling with data, strategic consulting, program building, and relationship cultivation.
So many grant professionals I’ve spoken with have said they wish they could spend less time actually writing and more time teaching grant readiness, strategy, and sustainability practices. Generative AI tools make this immediately possible, and these services can and should be offered much more widely and affordably to build equity across the sector.
AI will be misused to cause great harm in the world. We are fortunate that our work as grant professionals is text-based yet deeply human-centered and relational, which makes it far less susceptible to malicious uses of AI. What’s more, generative AI dramatically eases many of the most burdensome tasks in our profession, increasing our efficacy as grant professionals and accelerating the work of do-gooders in the world, some of whom are working to counteract unethical uses of AI.
Generative AI will radically transform the grant-making system and those of us operating in it have an enormous opportunity to shape the transformation to address deep, systemic, long-term inequities that have plagued the sector for decades. As much change as is on the horizon, the grant professionals’ code of ethics will hardly need amending because the values at its foundation will only become more important as AI reshapes the world. There are profound ethical questions we must grapple with as grant professionals in the age of AI, but those questions have far less to do with how we put words on a page, and far more to do with why we do it.
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.
THIS IS A MUST-READ!
Aw man, I hate ALL CAPS, but this thoughtful and thought-provoking article deserves it.
I fully and completely agree with every point. Thank you, Philip, for articulating what may be controversial points, but in fact are ones that should be debated and then embraced.
Well stated and I’ll be quoting you often (amidst edits to my ChatGPT-generated prose 😮)
I agree this is a very thoughtful approach to the ethical dilemma of AI in grant writing, but as a seasoned grant writer, I am concerned about the growing AI-gulf between savvy users and those of us traditionalists who are not early AI adopters. As a one-person office, it is hard enough to compete against the large R1s with their offices full of specialists and graphic designers churning out polished proposals on a daily basis - now I have to compete against computer AI bots as well. Time to retire!