The EU should not trust AI companies to self-regulate

Opinion
November 28, 2023

The question of how the AI Act should regulate AI systems that can serve multiple functions has been a rollercoaster ride since the idea first surfaced in the Council discussions in 2022. Now, as the trilogue negotiations enter a crucial phase — with both the end of the year and the end of the current legislature in sight — the question of how to deal with such systems remains contentious. In the latest swing of the pendulum, a number of powerful member states have positioned themselves against any form of mandatory regulation of foundation models.

In this opinion, we argue that failing to impose mandatory rules on the foundational building blocks of the AI ecosystem would be a gross mistake: just like over-regulation, a lack of regulation would benefit neither open-source approaches to building AI nor society as a whole.

Regulating General Purpose AI and Foundation Models

The Commission’s proposal for the AI Act focused on specific applications of AI, with the goal of regulating those applications that pose an increased risk. This risk-based approach was challenged by the emergence of powerful general-purpose AI (GPAI) systems. In late 2022, the Council adopted a position for the negotiations that included provisions that also applied to GPAI systems, and in July 2023, the European Parliament adopted its position that included a set of even more far-reaching obligations for so-called foundation models. In parallel with this expansion of scope, concerns about how such requirements would impact the development of open-source AI systems grew, with some stakeholders from the open-source AI ecosystem arguing that open-source AI systems and components should be excluded from any obligations imposed on GPAI systems and/or foundation models. While both the EP and Council positions ultimately included some accommodations for open-source AI developers, the Parliament’s report made no concessions regarding the requirements it would impose on foundation models.

A more proportionate approach

These potential obligations for foundation models would be impossible to fulfill for independent open-source AI developers, who often lack the necessary resources or organizational arrangements. Therefore, we have been working with others in the open AI ecosystem to develop an approach that balances the need to regulate foundation models with a regulatory mechanism that takes into account the specificity of open-source AI development. In July, together with Creative Commons, EleutherAI, LAION, Hugging Face, and GitHub, we published a position paper on Supporting Open Source and Open Science in the EU AI Act. In this paper, we introduced the idea of proportionate requirements for foundation models.

The paper highlighted that open-source AI development is well aligned with a number of requirements introduced in the EP report: transparency and technical documentation are vital characteristics of well-managed open-source projects. Based on this, the paper suggested that there should be a distinction between basic obligations that apply to all foundation models and another set of obligations that would only apply to foundation models deployed at scale and operated by (commercial) entities with more resources. Under these conditions, including foundation models in the scope of the AI Act would not result in a structural disadvantage for open-source AI developers.

By mid-November, key elements of the approach we had proposed in July had found their way into a compromise proposal from the Spanish Presidency, which included a so-called “tiered approach,” applying basic (transparency and documentation) obligations to all foundation models and reserving some of the more far-reaching obligations for GPAI systems/foundation models used at scale.

Unfortunately, this approach is now being challenged by several powerful member states, including France, Germany, and Italy, who have indicated that they are unwilling to support any binding regulation of foundation models and have instead proposed "mandatory self-regulation through codes of conduct."

There is no such thing as self-regulation

Leaving aside the simple fact that “mandatory self-regulation” is an oxymoron — it is either mandatory or self-regulation — it is clear that relying on the AI industry to self-regulate would be a mistake. In a recent paper co-authored by a (former) board member of OpenAI, the authors refer to “private sector investment in more interpretable AI models and incentives for information sharing” as “reducible costs.” Such statements clearly show that even the most well-funded companies are likely to avoid the costs associated with improving technology transparency unless they are legally required to do so. It’s a recurring lesson for tech regulators, and it’s critical that they finally get it.

If we have learned anything from the ex-post regulation of social networking services over the past decade that led to the adoption of the DSA, it is that we cannot afford another dominant technological paradigm that is shaped out of society’s sight. Researchers, regulators, and civil society need to be able to assess the impact of AI, and mandatory transparency and documentation are the only truly proven mechanisms to create the conditions for this.

This means the AI Act must include mandatory documentation and transparency requirements for all foundation models, shedding light on how models were trained and what data was used. This information should include data on resource consumption, which is essential for shaping policies that will ensure this new wave of technology is developed and deployed as sustainably as possible, rather than exacerbating the already unsustainable resource use of the tech industry.

In this situation, it is important that the European Parliament and the Commission stick to their guns and do not give in to the self-serving rhetoric of a few member states who are acting on the misguided idea that weakening much-needed regulation will somehow give them a leg up in the global race to capture the AI space.

Making transparency and documentation obligations voluntary is not an "innovation-friendly approach based on European values"; it is sacrificing the public interest on the altar of misconstrued notions of technological progress.

To use the words of Commissioner Breton, it is clear that when it comes to regulating foundation models, “Big Tech and AI startup Mistral do not represent the public interest.” This insight is echoed in a Parliament discussion paper from late last week, which argues that an approach “based entirely on self-regulation cannot produce safe AI systems” and insists on establishing a basic set of technical transparency requirements for all GPAI models while reserving an additional set of binding obligations “on transparency and documentation, internal model assessment and testing (including red-teaming), cybersecurity, and compliance with standards to reduce the environmental impact of models” for those models that pose systemic risks.

To leave room for innovation, the Parliament’s negotiators also propose to exclude basic research, development, and pre-commercial prototyping activities from the scope of the regulation. As we have argued in our position paper, this is a far better solution than sacrificing the much-needed transparency requirements for all foundation models in the name of innovation.

Paul Keller
Zuzanna Warso