AI is already out there. We need commons-based governance, not a moratorium

Opinion
March 30, 2023

Yesterday, the Future of Life Institute published an open letter titled “Pause Giant AI Experiments.” Its signatories call for an immediate six-month moratorium on “the training of AI systems more powerful than GPT-4”.

The letter has received criticism centered on how it frames the risks related to AI systems. The authors focus on speculative catastrophic risks and use narratives that are prime examples of criti-hype (criticism that both feeds and feeds on hype). This mindset is reflected, for instance, in the claim that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds”.

The negative social impacts of AI systems are often much more mundane. Harms caused by AI will not be addressed by a moratorium on future AI development – but rather by better governance and regulation of existing AI systems and their development.

And to do that, we should shift away from a language of speculative risks to a conversation about real risks and society-centric technologies. The point is not to stop “digital minds”, but to build technological systems that benefit humans and the planet.

It’s telling that the letter focuses on models like GPT-4, which can easily be portrayed as alien, powerful “minds.” In our work on the AI_Commons case study, we pointed to the need for better governance of AI training datasets. Although dataset governance is not as attention-grabbing as the conversation about creating “nonhuman minds,” experts consider it just as vital – see, for example, this recent proposal from MLCommons.

The letter suggests a simple dichotomy: you can keep AI research open or closed. In our exploration of AI governance, we propose a different approach based on the principles of commons-based governance. From this perspective, social harms are solved not by the technology’s containment but by its proper, democratic governance.

The section of the letter that emphasizes the importance of stronger AI governance includes some ideas worth supporting. However, it omits many important mechanisms.

What’s missing are measures to ensure AI transparency, attribution, and traceability, many of which have been proposed by proponents of open AI development (some have been explored in the AI Deep Dive organized by the Open Source Initiative). If supporters of the ideas expressed in the letter are concerned about AI safety, they should ensure that the models (and the underlying datasets and other elements of AI systems) are available for scrutiny.

Further reading

The readings I would recommend on the issue include: “A Misleading Open Letter About Sci-Fi AI Dangers Ignores the Real Risks” (AI Snake Oil), “The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess” (Vice), and “Policy makers: Please don’t fall for the distractions of #AIhype” (Emily Bender on Medium).

Alek Tarkowski