Everyone's in the Pool: Collective Action for AI Ethics
Why We All Need to Dive in So AI Doesn't Go off the Deep End
Welcome to this edition of The Process, a free weekly newsletter and companion podcast discussing the future of technology and philanthropic work.
If you enjoy reading and listening, please consider subscribing and sharing with your community!
Thank you!
Philip
From my little vantage point at the intersection of technology and philanthropic work, I’m beginning to see some very positive signs of collective action to get ourselves organized at the speed of AI. At a high level, I’m encouraged by people acting with urgency and with a grasp of the importance of this emerging technology and the potential scale of its impact. From Congress to the grassroots, it seems many have learned hard lessons from our collective failure over the last 20 years to mitigate the harms of social media, and we don’t want to repeat those mistakes.
Perhaps what we’re beginning to understand is that while most technology is neither good nor evil, neither is it neutral. The key determinant of the impact a technology has on society is the degree to which people cooperate to harness its benefits and mitigate its potential for harm.
Why is collective action important for AI?
A crude but effective analogy for the importance of collective action in certain critical scenarios was shared by professors at Duke commenting on states’ differing approaches to fighting COVID,
States with looser social-distancing measures can create problems for those trying to adhere to them, the Duke professors said.
“It raises a lot of issues,” said Gunn. “The best analogy I heard is: ‘Well let’s just have one end of the pool that the kids are allowed to pee in.’”
Similarly, in our digital world, there is no way to reserve a section of the pool for anything-goes AI activities. Smartphones have brought most of humanity online, which means most people can already access the most powerful large language models (LLMs) ever created, such as GPT-4. Models are also rapidly shrinking in size and becoming more efficient to operate, which means many powerful LLMs will soon be portable, hosted on personal devices and customizable by the individual.
With so much power in all of our hands, and everyone subject to the same consequences, coordinated collective action at every level of society is imperative. The only way we will be able to make progress toward positive AI outcomes, and discourage malevolence, will be to have effective common policies the overwhelming majority of people abide by.
What has been done so far?
The first news-making call to collective action was an open letter issued by the Future of Life Institute calling on,
AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
This letter currently has more than 30,000 signatures and initially gained attention because of numerous notable signatories. However, the movement lost momentum as people judged a pause to be infeasible, and of questionable usefulness. How would we enforce such a ban? And if implemented, what, if any, kind of effective corrective action could be achieved in only six months?
While no such pause is likely, the attempt itself was useful as a sort of throat-clearing exercise for our public voice. Shortly after the letter was published, on May 16, 2023, the Senate Judiciary Committee heard testimony on AI oversight and regulation from three AI experts: Sam Altman, chief executive officer of OpenAI; Christina Montgomery, IBM chief privacy and trust officer; and Gary Marcus, NYU emeritus professor. 🎥 You can watch the testimony here.

The session was unusually bipartisan and collegial for the current tenor of U.S. politics, and the senators seemed genuinely interested to learn about generative AI, many having used ChatGPT to craft parts of their remarks. The witnesses offered some technical analysis, but mostly underscored the need for collective action, especially on the part of government and the large tech firms developing the most powerful AI models. The group also suggested using domestic and international frameworks that regulate other critical industries and powerful technologies as models to follow.
The last major rallying cry to make it onto my radar was the following 22-word statement from the Center for AI Safety,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Notably, OpenAI CEO Sam Altman, whose name was absent from the pause letter, co-signed this statement, which was intentionally written to be concise, non-prescriptive, and provocative in order to maximize comprehension and attention.
Even this extremely brief declaration ignited debate. Some critics judged it to be alarmist and overblown. Others were skeptical of wealthy and powerful technologists asking to be regulated, viewing the statement as a melodramatic cry to “Stop me before I innovate again!” Or, perhaps they are simply building hype for their products. There is no perfect way to get the conversation going, but I am certain we must.
What can the rest of us do?
In recent weeks, I’ve been invited to participate in two projects seeking to help define AI codes of conduct and ethics recommendations for use in our sector, and on June 10, 2023, the Grant Professionals Association board of directors adopted a statement on AI. These are inspiring examples of smart people taking action to get different communities organized. There is a lot of grassroots activity taking on these issues and plenty of room at the table for folks like us to be involved.
My hope is that many such initiatives will take place, that people will craft guidelines across sectors and for specific use cases, and that we’ll then begin the work of organizing all of these frameworks into a hierarchy we can use to understand how the foundational principles of human-centered AI flow all the way down to the most precise applications in different fields of work.
Many of you have shared candidly that the speed, technicality and implications of generative AI are, quite frankly, a bit overwhelming. Count me in the same boat — this stuff is bonkers. Given how much most of us already have on our plates, I spent some time figuring out how to help us all ease into discussions of AI ethics and policy, while also giving folks a hands-on opportunity to engage with a form of generative AI.
Here’s what I came up with 👇
Philip’s call-to-action
Overview
Using the UNESCO Recommendation on the Ethics of Artificial Intelligence, I’ve created a generative AI chatbot that anyone can interact with by clicking the button below. I encourage you to try the chatbot and ask it your questions about AI ethics; it will base its responses on UNESCO’s useful, human-centered document.
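For the technically curious, bots like this are typically built with a pattern called retrieval-augmented generation (RAG): split the source document into chunks, find the chunks most relevant to your question, and ask the model to answer using only those excerpts. Below is a minimal sketch of that general pattern, assuming the OpenAI Python SDK and a local plain-text copy of the recommendation. It’s illustrative only, with hypothetical names, and not the actual implementation behind the button.

```python
# Minimal sketch of a document-grounded (RAG) chatbot.
# Assumptions: OPENAI_API_KEY is set, and "unesco_recommendation.txt"
# is a hypothetical local plain-text copy of the UNESCO document.
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Embed text chunks with an OpenAI embedding model."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# 1. Split the document into overlapping chunks and embed them once.
with open("unesco_recommendation.txt") as f:
    text = f.read()
chunks = [text[i:i + 1500] for i in range(0, len(text), 1200)]
chunk_vectors = embed(chunks)

def ask(question, k=4):
    # 2. Retrieve the k chunks most similar to the question.
    q_vec = embed([question])[0]
    ranked = sorted(zip(chunks, chunk_vectors),
                    key=lambda pair: cosine(q_vec, pair[1]), reverse=True)
    context = "\n\n".join(chunk for chunk, _ in ranked[:k])
    # 3. Constrain the model to answer from the retrieved excerpts only.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided excerpts from the "
                        "UNESCO Recommendation on the Ethics of Artificial Intelligence."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(ask("How does UNESCO define ethics?"))
```

The important design choice is the system prompt: constraining the model to the retrieved excerpts is what keeps the bot’s answers anchored to UNESCO’s text rather than to the model’s general opinions.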
Ideas
Read the 44-page recommendation yourself and then ask the bot questions to help deepen your understanding
For example, “How does UNESCO define ethics?”
Ask the bot to summarize, rephrase, or restructure the information in ways that help you to better organize your thoughts
Ask the bot to play a quiz game with you based on the document and its concepts
Activity
In the comments below, please consider sharing any particularly thought-provoking experiences you have with the bot, including but not limited to insightful questions and responses, concerns and ideas for further investigation and dialogue.
Or you can engage with me on LinkedIn, and please have patience with my response time as I try to limit my time on social media as much as possible.
Also, feel free to share this article within your communities, especially where folks are grappling with AI ethics in work and life and would like a practical way to dip their toes in the pool. Happy swimming, y’all.
Thanks for reading this edition of The Process. Please share and comment, or email me at philip.deng@grantable.co.
Philip Deng is the CEO of Grantable, a company building AI-powered grant writing software to empower mission-driven organizations to access grant funding they deserve.
How'd you create the UNESCO bot?