Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
“Responsible” is rapidly becoming a popular prefix in the field of artificial intelligence, as in, “We need to ensure the use of responsible AI.” Thousands of earnest efforts are underway around the world to mitigate harm and encourage beneficial uses of smart technology. A certain look and feel will likely emerge from these efforts, and we should prepare to be discerning, so we are not fooled by branding, which can, on one hand, help us make responsible decisions more easily, and on the other, hide malevolence. This week’s post is my attempt to offer a few guidelines for evaluating and using responsible AI frameworks.
Few of us have time to deeply research the products and services we use, so we rely on visual shorthand to tell us what we need to know. Food certification logos, for example, let us quickly make important purchasing decisions that directly affect our health and nutrition. What allows us to rely on these symbols is trust in the organizations behind them. When we choose a product or service based on a certification standard, we are placing our trust in a relatively small group of people at the certifying organization who we believe have done the due diligence to ensure the certification is well earned.
Is there a visible and vibrant community?
Canvassing the people and organizations involved in creating a particular certification, and the entities that have been certified, can provide an enormous amount of information about the credibility of a given standard. If other brands you know and trust have enough faith in a particular credential to adhere to its guidance and display this choice prominently, that says a lot.
Even better, an active and accessible dialogue among community members demonstrates vitality and power sharing, which can tell you a lot about the general health of a community and its framework. For example, if there is little or no pushback against a standard, its tenets may be too easily met, making it a rubber-stamp certification that requires no behavioral change.
On the other hand, if a certifying organization has facilitated agreement among a large group of stakeholders with differing priorities, this is a sign they’ve invested deeply in building trust and consensus, and the outcomes of this effort are more likely to be of value to you.
Will adopting a standard require behavior change?
Like most prophylactic measures, adhering to a responsible AI framework means incurring some cost in time, energy, and resources now to prevent much larger harms and setbacks later. Doing things differently is the point.
A framework that doesn’t lead to any positive behavioral change isn’t a very useful framework, nor is technical compliance that evades the spirit of the guidance (see seatbelt picture above).
To stay with automotive analogies: driving on the right side of the road isn’t an act of virtue signaling; it’s a behavior that allows us all to realize the most positive potential of traveling in cars. Likewise, frameworks for responsible AI should not be thought of as demonstrative, but as functional protocols for collective action that will unlock the greatest benefits of smart software.
Is failure a possibility?
Whether or not it is possible to fall short of a standard is, to a large degree, a matter of its specificity. Perhaps the least specific call for responsibility I’ve written about was the following 22-word statement issued by the Center for AI Safety,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Contrast that declaration with the precision of this next one from an initiative I’m involved in, The Framework toward Responsible AI for Fundraising,
Fundraising AI Actors must protect personal and sensitive data by following robust security standards within our respective roles, maintaining compliance with relevant data protection regulations, and respecting the privacy of donors, beneficiaries, and stakeholders.
These principles should be part of all phases of the AI system lifecycle, including:
1. Consent
2. Control over the use of data
3. Ability to restrict data processing
4. Right to rectification
5. Right to erasure
6. Adherence to privacy laws
The statement issued by the Center for AI Safety is broadly relevant to humanity as a whole, while the guidance offered by the fundraising framework is most suited to fundraisers, of course. They exist on two different levels of a complex, layered, and overlapping hierarchy of responsible AI frameworks. The point is not to choose one or the other, but to find the right depth of focus so the frameworks you adopt are useful for your work.
While generally an undesirable outcome, a real potential for failure is a good indicator that you and your organization are going deep enough. Your work, AI frameworks, and AI technology are not static; they are all constantly changing. This means the work of keeping everything in alignment is ongoing, a living process.
All of us fall short from time to time, and when that happens, own it and learn. Without the right responsible AI framework, we might not even be aware of a failure in the first place, while having one gives us rungs to grasp as we pull ourselves up to do better next time.
A Framework for Frameworks
Evaluating Responsible AI Frameworks:
1. Community Engagement and Credibility:
☐ Check the involvement of people and organizations in creating the certification.
☐ Look for brands and entities that you trust endorsing the framework.
☐ Seek out frameworks with an active, accessible, and diverse community dialogue.
☐ Look for frameworks with broad stakeholder agreement, indicating investment in trust-building.
2. Impactful Behavior Change:
☐ Evaluate whether adopting the framework necessitates real behavior change.
☐ Distinguish between mere technical compliance and following the spirit of the responsible AI framework.
☐ Consider whether adhering to the framework aligns with existing organizational values, or creates an opportunity to update them.
3. Embracing Potential Failure:
☐ Prefer frameworks with specific guidelines and principles.
☐ Look for frameworks that acknowledge the possibility of falling short.
☐ Assess the depth of the framework's focus, ensuring it suits your specific context.
4. Choosing the Right Framework:
☐ Match the framework's focus to your field or profession.
☐ Select frameworks that are values-driven.
☐ Prefer frameworks that address the range of phases within AI system lifecycles.
5. Aligning with a Trustworthy Initiative:
☐ Research and endorse initiatives that resonate with your values and goals.
☐ Participate in events, summits, and communities that promote responsible AI.
☐ Engage with like-minded individuals and organizations to collectively promote ethical AI.
The Framework toward Responsible AI for Fundraising
I’m thrilled to be deepening my involvement with an effort that inspired this piece. The Framework toward Responsible AI for Fundraising is,
[A] member-driven initiative supporting those working within the fundraising profession with the opportunity to collectively learn about Responsible AI, demonstrate their leadership around the subject, support best practices of Responsible AI applications, and support building a thriving charitable giving sector. The Framework for Responsible AI for Fundraising is intended to maximize the benefits of AI for fundraising purposes while minimizing the risk of damage to the hard-fought public trust of the nonprofit sector.
My company, Grantable, has endorsed the Framework, and organizations interested in joining the movement can read and endorse it here:
2023 Fundraising.AI Virtual Global Summit
I also hope to play a role in supporting the upcoming 2023 Fundraising.AI Virtual Global Summit, taking place on October 23 and 24, 2023. Registration is free, and I hope many readers will participate. The Summit will virtually bring together more than 30 speakers to facilitate numerous tracks of thoughtful discussion, and to help all of us build this community as a force for good. Please join us!
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.