AI and the Commons

Exploring commons-based approaches to machine learning

This line of work explores the intersections between AI and openness. The release of powerful AI models under open licenses was a breakthrough moment for the world of open, signaling the emergence of a new field in which the principles of openness are applied. The field is still nascent, with no established norms for openly sharing the different elements of the machine learning stack: data, models, and code. Addressing this gap requires governance mechanisms that uphold open sharing while addressing power imbalances and safeguarding digital rights.

The argument for open sharing of AI components is familiar to open movement activists. Key governance debates revolve around datasets and model licensing. But there is a need to better understand the benefits and risks and to explore new frameworks for sharing. There is a balance to be struck between openness and responsible use.

Our work in this area is guided by the insight that commons-based models and approaches offer a solution to this challenge.

Timeline

Spawning has released PD12M, a fully open dataset of 12.4 million image-caption pairs. It consists exclusively of public domain and CC0-licensed images obtained from Wikimedia Commons, a large number of cultural heritage organizations, and the iNaturalist website. From the paper accompanying the release:
We present Public Domain 12M (PD12M), a dataset of 12.4 million high-quality public domain and CC0-licensed images with synthetic captions, designed for training text-to-image models. PD12M is the largest public domain image-text dataset to date, with sufficient size to train foundation models while minimizing copyright concerns. Through the Source.Plus platform, we also introduce novel, community-driven dataset governance mechanisms that reduce harm and support reproducibility over time.
The release of PD12M is remarkable not only for the size of this fully open dataset but also for the holistic approach that Spawning has taken. Via the source.plus platform, Spawning provides community-based governance mechanisms, along with an exemplary level of transparency regarding the sources of the images included in the dataset. The release is exciting not only because it builds on our ideas for a public data commons but also because Spawning sees it as a first step towards offering a foundational public domain image model, free of IP concerns, that will help artists fine-tune and own their models on their own terms.
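For anyone who wants to look at the dataset itself, here is a minimal sketch of streaming the PD12M metadata with the Hugging Face datasets library. The "Spawning/PD12M" identifier and the record fields are assumptions based on the release, so verify them on the Hub; the records carry image URLs and synthetic captions rather than the image files themselves.

```python
# A minimal sketch of streaming PD12M metadata, assuming the dataset is
# published on the Hugging Face Hub as "Spawning/PD12M" (verify the exact
# identifier and schema on the Hub).
from datasets import load_dataset

# Stream instead of downloading: 12.4 million records is a lot of metadata,
# and each record holds an image URL and caption, not the image bytes.
pd12m = load_dataset("Spawning/PD12M", split="train", streaming=True)

for record in pd12m.take(3):
    print(record)  # expect fields like the image URL, caption, and provenance
```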

Last week, Spawning launched source.plus, a platform for “curating, enriching and downloading non-infringing media collections in bulk for AI training.” This is a significant step toward addressing a host of issues that plague AI training datasets such as LAION or face recognition training sets.

The aim of this experimental platform is to demonstrate that licensed content is not the only viable solution:

This means the most conscientious developers and most affected communities are often on the sidelines of this rapidly developing field, whereas these are the very groups that need to be steering its evolution, and they too should be able to benefit from participation with AI.

Its real value lies not just in the volume of aggregated media files but in how they are curated and governed. While it serves as an interface to established collections, it also introduces additional mechanisms, many of which we proposed in our recent white paper, Commons-based governance of data sets for AI training. For example, source.plus is the first collection to offer an “opt-out” mechanism. Spawning also plans to introduce value-sharing mechanisms, including paid collections of in-copyright works and a donation mechanism that supports cultural heritage institutions.
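To make the opt-out idea concrete, here is a deliberately simplified sketch of such a mechanism. The registry and function names are hypothetical, not Spawning's actual API; the point is only the shape of the mechanism, in which every candidate work is checked against a rights-holder registry before it enters a training collection.

```python
# A hypothetical sketch of an opt-out filter for assembling training data.
# The registry here is a hard-coded stand-in; a real system would query a
# live opt-out service maintained on behalf of rights holders.
from typing import Iterable, List

OPTED_OUT_REGISTRY = {
    "https://example.org/image/123",  # rights holder has opted out
}

def filter_opted_out(urls: Iterable[str]) -> List[str]:
    """Keep only URLs whose rights holders have not opted out of AI training."""
    return [url for url in urls if url not in OPTED_OUT_REGISTRY]

candidate_urls = [
    "https://example.org/image/123",  # excluded
    "https://example.org/image/456",  # kept
]
print(filter_opted_out(candidate_urls))  # ['https://example.org/image/456']
```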

Researchers at Knowing Machines have published Models all the way down, a visual investigation that takes a detailed look at the construction of the LAION 5B dataset “to better understand its contents, implications, and entanglements.” The investigation provides detailed insight into the internal structure and the strategies used to build one of the largest and most influential training datasets behind the current crop of image generation models. Among other things, the researchers show that the dataset's curators relied heavily on algorithmic selection to assemble it, and as a result…
…there is a circularity inherent to the authoring of AI training sets. [...] Because they need to be so large, their construction necessarily involves the use of other models, which themselves were trained on algorithmically curated training sets. [...] There are models on top of models, and training sets on top of training sets. Omissions and biases and blind spots from these stacked-up models and training sets shape all of the resulting new models and new training sets.
One of the key takeaways from the researchers (who, for all their critical observations, give LAION credit for releasing the dataset as open data) is that we need more dataset transparency to understand the structural configuration of today's generative AI systems. This is very much in line with what we have been advocating in the context of the AI Act and will continue to push for in the implementation of the Act.

Screenshot from Models all the way down, © Knowing Machines
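The circularity the researchers describe is easiest to see in code. Below is a minimal sketch of LAION-style curation, in which a pre-trained CLIP model decides which image-caption pairs enter a new dataset. The model checkpoint and similarity threshold are illustrative assumptions (LAION reportedly used a CLIP-based threshold of roughly this magnitude), not a reproduction of the actual pipeline.

```python
# A minimal sketch of model-based dataset curation: keep an image-caption
# pair only if a pre-trained CLIP model scores the pair as similar enough.
# This is the circularity: a new training set is curated by a model that was
# itself trained on an earlier, algorithmically curated dataset.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_pair(image: Image.Image, caption: str, threshold: float = 0.28) -> bool:
    """Return True if CLIP judges the image and caption to match."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Cosine similarity between the normalized image and text embeddings.
    image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    similarity = (image_emb @ text_emb.T).item()
    return similarity >= threshold
```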

A group of AI researchers coordinated by the French start-up Pleias wants to challenge the belief that you need copyrighted materials to train an LLM that can compete with the models developed by leading AI companies. Yesterday, they released what has been dubbed the largest open AI training dataset consisting entirely of public domain texts. The collection is called “Common Corpus” and is available on Hugging Face for download. The resource is multilingual: besides English, it includes the largest open collections in French, German, Spanish, Dutch, and Italian, as well as collections for other languages.
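As a rough illustration of how such a corpus can be explored, the snippet below streams records from the Hugging Face Hub and filters them by language. The "PleIAs/common_corpus" identifier and the language field are assumptions; check the Hub for the exact repository layout and per-language subsets.

```python
# A minimal sketch of streaming Common Corpus from the Hugging Face Hub.
# The dataset identifier and the "language" field are assumptions; verify
# both on the Hub before relying on them.
from datasets import load_dataset

# Stream rather than download: the corpus is far too large to pull at once.
corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

# Keep only French documents (assuming per-record language metadata).
french = corpus.filter(lambda doc: doc.get("language") == "French")
for doc in french.take(1):
    print(doc)
```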

Training data is a key resource for developing AI systems. Until very recently, it was commonly believed that LLMs, like those behind popular services such as ChatGPT or Bard, could not be trained without relying on copyrighted content. If that is true, access to high-quality data may remain a significant barrier for independent AI developers seeking to compete in the LLM market.

Datasets consisting only of public domain texts have significant limitations, the most important being that they lack contemporary information: they are composed of historical sources and older publications whose copyright has expired. It remains to be seen whether public domain datasets can indeed compete with datasets containing more contemporary content that is protected by copyright.

Last week, the Commission published the AI Innovation Package to support Artificial Intelligence startups and SMEs. The measures listed in the package include facilitating access to AI-focused supercomputers, which is expected to help expand the use of AI to a wide range of users, including European start-ups and SMEs. An article in Science|Business rightly pointed out that the plan outlined by the Commission suggests that it is pinning its hopes on private companies to keep the EU competitive in AI.

Putting faith in private actors is not sufficient. The way to address the imbalance of power and market concentration must also include investing in the development of systems that serve society and have the best interests of people and the planet at their core. At the moment, it does not seem that this approach will be implemented in the field of AI. Whether we can expect any efforts to create a public option for AI in Europe remains to be seen. Some public interests, such as ensuring diversity and transparency in the datasets that train AI models, are simply not always aligned with the interests of corporations, which may favor the fastest and cheapest solutions. This is where public authorities, civil society, and the large communities of scientists and practitioners working on AI in Europe have a role to play.

Science|Business reported that German Research Minister Bettina Stark-Watzinger believes that “no state or association of states can match the investments made by large corporations like Microsoft or Google with public investments.”

This suggests that the German government is throwing in the towel, assuming that private actors are equipped and willing to develop digital services that serve the public interest and allow people to enjoy their fundamental rights. This approach is disappointing, and given the example of private social media platforms that have failed to fulfill the role of digital public spaces, it does not appear to be appropriate. To put it simply: without public funding, European society won't get AI that serves the public.

Last week, Stanford HAI, Stanford CRFM, Princeton CITP, and RegLab released a Policy Brief on Considerations for Governing Open Foundation Models.

The brief outlines the current evidence on the risks of open foundation models (FMs) and offers recommendations for how policymakers should think about those risks. It argues that open FMs, defined by the authors as "models with widely available weights," "provide significant benefits by combating market concentration, catalyzing innovation, and improving transparency." The authors therefore conclude that "policymakers should explicitly consider the potential unintended consequences of AI regulation on the vibrant innovation ecosystem around open foundation models."

The policy brief also points out that, despite the widespread concern about the dangers of open foundation models that has dominated policy discussions, "the existing evidence on the marginal risk of open foundation models remains quite limited." The key question for understanding their impact is the risk posed by open models relative to the risk posed by other models (the marginal risk):

To what extent do open foundation models increase risk relative to (a) closed foundation models or (b) pre-existing technologies such as search engines?

Coming a few days after the final compromise on the EU AI Act, the policy brief provides further support for the AI Act's approach of providing targeted exemptions for open (source) AI developers. The final compromise on the AI Act sidesteps policies, such as liability for downstream harm and licensing of model developers, that the authors of the policy brief see as particularly problematic for open AI developers. As we argued in our analysis of the Act, the overall approach to open source AI development in the AI Act is quite sound, although there is still room for improvement by getting some of the details of the transparency obligations right.

As part of the CSCW 2023 conference, Alek co-organized a workshop titled “Can Licensing Mitigate the Negative Implications of Commercial Web Scraping?”. Representatives of several research institutions, Hugging Face, Creative Commons, RAIL, and Hippo AI participated in the conversation. You can read the short paper outlining the ideas behind the workshop in the ACM Digital Library.
Zuzanna and Alek gave a talk on commons-based governance of AI datasets, as part of this year’s Deep Dive on AI webinar series, organized by the Open Source Initiative. The webinars are part of OSI’s initiative to define a new standard for Open Source AI systems, in which we are participating. The talk highlighted the importance of strong standards for data sharing that should be part of a community standard for open source AI. You can watch the video here.
The Mozilla Foundation published a blog post outlining its ideas on Fostering Innovation & Accountability in the EU’s AI Act. In this blog post, Mozilla highlights our recent paper on Supporting Open Source and Open Science in the EU AI Act and makes two recommendations for EU lawmakers working on finalizing the AI Act that echo some of our own recommendations:
  1. The AI Act should allow for proportional obligations in the case of open source projects while creating strong guardrails to ensure they are not exploited to hide from legitimate regulatory scrutiny.
  2. The AI Act should provide clarity on the criteria by which a project will be judged to determine whether it has crossed the “commercialization” threshold, including revenue.
In a third recommendation, Mozilla highlights the importance of definitional clarity when it comes to regulating open source AI systems. Here, Mozilla suggests maintaining a strict definition (one that would exclude newer licenses such as the RAIL family) and clarifying which components would need to be licensed under an open license for a system to be considered an open source AI system. According to Mozilla, this should indicatively apply to models, weights, and training data.

The European Union's upcoming AI Act will require adequate standards to become fully operational, and much work is needed to ensure that the standardization process does not conflict with the Act's inclusion and transparency objectives.

The process will be led by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). In the past, they have been criticized for their secrecy and lack of transparency. The standards must be made public, but some fear that the private sector will have too much control over the process, which could have an impact on human rights. The standards' nature and scope will also have geopolitical implications, with some calling for greater international cooperation.

Standards will be essential in enforcing the EU's AI legislation, and CEN-CENELEC will have just two years to formulate and agree on a series of AI standards.

Natali Helberger and Nicholas Diakopoulos have published an article titled "ChatGPT and the AI Act" in the Internet Policy Review. The article argues that the AI Act’s risk-based approach is not suitable for regulating generative AI due to two characteristics of such systems: their scale and broad context of use. These characteristics make it challenging to regulate them based on clear distinctions of risk and no-risk categories.

The article is relevant to us in the context of open source, general-purpose AI systems, and their potential regulation.

Helberger and Diakopoulos propose looking for inspiration in the Digital Services Act (DSA), which lays down obligations on mitigating systemic risks. A similar argument was made by Philipp Hacker, Andreas Engel, and Theresa List in their analysis.

Interestingly, the authors also point out that providers of generative AI models are currently making efforts to define risky or prohibited uses through contractual clauses. While they argue that “a complex system of private ordering could defy the broader purpose of the AI Act to promote legal certainty, foreseeability, and standardisation,” it is worth considering how regulation and private ordering (through RAIL licenses, which we previously analyzed) can contribute to the overall governance of these models.
The Collective Intelligence Project has published a new working paper by Saffron Huang and Divya Siddarth that discusses the impact of Generative Foundation Models (GFMs) on the digital commons. One of the key concerns raised by the authors is that GFMs are largely extractive in their relationship to the digital commons:
The dependence of GFMs on digital commons has economic implications: much of the value comes from the commons, but the profits of the models and their applications may be disproportionately captured by those creating GFMs and associated products, rather than going back into enriching the commons. Some of the trained models have been open-sourced, some are available through paid APIs (such as OpenAI’s GPT-3 and other models), but many are proprietary and commercialized. It is likely that users will capture economic surplus from using GFM products, and some of them will have contributed to the commons, but there is still a question of whether there are obligations to directly compensate either the commons or those who contributed to it.
In response, the paper identifies three proposals for dealing with the risks that GFMs pose to the commons. Read the full paper here.
The launch of BLOOM, an open language model capable of generating text, and the related RAIL open licenses by BigScience, together with the launch of Stable Diffusion, a text-to-image model, shows that a new approach to open licensing is emerging. In Notes on BLOOM, RAIL, and openness of AI, Alek outlines the challenges that AI researchers pose to established ways of understanding openness as they aim to enforce their vision of not just open but also responsible AI.
