AI and openness

Exploring commons-based approaches to machine learning

The release of powerful machine learning models under open licenses was a major event in the AI/ML development space in 2022. Until then, large generative models such as GPT-3 and DALL-E were seen as a force that would concentrate digital power in the hands of a few corporations. The release of the Stable Diffusion image generation model (alongside other models like BLOOM and Whisper) marked a significant change.

This was a breakthrough moment for the world of open, indicating the emergence of a new field in which the principles of openness are applied. It is a nascent field in which there are still no established norms for openly sharing the different elements of the machine learning stack: data, models, and code. Moreover, a new approach to sharing has emerged, expressed in the suite of RAIL licenses, which aim to combine an open licensing model with rules for responsible use.

By early 2023, it became clear that the emergence of generative AI would re-ignite the copyright debates in which free culture and access-to-knowledge advocates have been engaged for the past two decades. Until then, public discussion about the potential harms of AI systems had focused on issues such as bias, disinformation, and threats to privacy. Now, the list must include the issue of creators’ rights and rules for the reuse of creative works. This is a conversation that is familiar to open movement activists, but one that needs to move beyond its traditional framing. It is essential to understand how to balance creators’ and users’ rights in a context where creation is automated and reuse occurs in new ways.

Our research seeks to contribute to this public debate and to the emerging field of open and commons-based approaches to machine learning. We are particularly interested in the commons-based governance of datasets and models, the impact of generative AI on creativity, and the emergence of new licensing models that balance openness and responsible use.


Helberger and Diakopoulos on the AI Act and ChatGPT
Natali Helberger and Nicholas Diakopoulos have published an article titled "ChatGPT and the AI Act" in the Internet Policy Review. The article argues that the AI Act’s risk-based approach is not suitable for regulating generative AI due to two characteristics of such systems: their scale and their broad context of use. These characteristics make it challenging to regulate them on the basis of a clear distinction between risk and no-risk categories.

The article is relevant to us in the context of open source, general-purpose AI systems, and their potential regulation.

Helberger and Diakopoulos propose looking for inspiration in the Digital Services Act (DSA), which lays down obligations on mitigating systemic risks. A similar argument was made by Philipp Hacker, Andreas Engel, and Theresa List in their analysis.

Interestingly, the authors also point out that providers of generative AI models are currently making efforts to define risky or prohibited uses through contractual clauses. While they argue that “a complex system of private ordering could defy the broader purpose of the AI Act to promote legal certainty, foreseeability, and standardisation,” it is worth considering how regulation and private ordering (through RAIL licenses, which we previously analyzed) can contribute to the overall governance of these models.

Opt-out requests collected for 80 million artworks
According to the announcement, 40,000+ individual artworks have been opted out of use in ML training via the tool. The remaining 79 million+ opt-outs were registered through partnerships with platforms (such as ArtStation) and large rights holders (such as Shutterstock).

These opt-outs apply to images included in the LAION-5B dataset used to train the Stable Diffusion text-to-image model. Stability AI has announced that the opt-outs collected and made available via an API will be respected in the upcoming training of Stable Diffusion V3.

As we have previously argued, such opt-outs are supported by the EU's legal framework for machine learning, which allows rights holders to reserve the right to text and data mining carried out for any purpose other than research undertaken by academic research institutions. This is the first large-scale initiative to leverage this framework to offer creators and other rights holders the ability to exclude their works from being used for machine learning training.

Generative AI and the Digital Commons
The Collective Intelligence Project has published a new working paper by Saffron Huang and Divya Siddarth that discusses the impact of Generative Foundation Models (GFMs) on the digital commons. One of the key concerns raised by the authors is that GFMs are largely extractive in their relationship to the Digital Commons:
The dependence of GFMs on digital commons has economic implications: much of the value comes from the commons, but the profits of the models and their applications may be disproportionately captured by those creating GFMs and associated products, rather than going back into enriching the commons. Some of the trained models have been open-sourced, some are available through paid APIs (such as OpenAI’s GPT-3 and other models), but many are proprietary and commercialized. It is likely that users will capture economic surplus from using GFM products, and some of them will have contributed to the commons, but there is still a question of whether there are obligations to directly compensate either the commons or those who contributed to it.
In response, the paper identifies three proposals for dealing with the risks that GFMs pose to the commons. Read the full paper here.

Notes on BLOOM, RAIL, and openness of AI
The launch of BLOOM, an open language model capable of generating text, and of the related RAIL open licenses by BigScience, together with the launch of Stable Diffusion, a text-to-image model, shows that a new approach to open licensing is emerging. In Notes on BLOOM, RAIL, and openness of AI, Alek outlines the challenges that AI researchers pose to established ways of understanding openness, as they aim to enforce a vision of AI that is not just open but also responsible.