AI Act and Open Source

Status: Published in OJ
Type: Regulation

This observatory documents the discussion on how to deal with general purpose AI systems in the European Union’s Artificial Intelligence Act, with a particular focus on the implications for open source AI development. If you are looking for coverage of the impact of the AI Act on the protection of fundamental rights, please visit this page maintained by EDRi.

On 21 April 2021, the European Commission published its proposal for a Regulation laying down harmonized rules on artificial intelligence (also known as the Artificial Intelligence Act). The AI Act is one of the flagship digital legislative initiatives of the first von der Leyen Commission. The proposal was widely seen as the first attempt at a comprehensive regulatory approach to the challenges and opportunities posed by rapid technological development in this field.

Our observatory documents the development of the proposal throughout the legislative process, which concluded in early 2024. It is limited to provisions related to general purpose AI systems, which began to emerge in late 2021 and became a focal point of discussion in 2022 alongside the public availability of a new generation of generative AI models.

Timeline

Council approves the AI Act
The Council of Ministers unanimously adopted the final text of the AI Act that was agreed upon during the trilogue negotiations in December 2023.
European Parliament approves final AI Act text
The European Parliament adopted, by 523 votes to 46 with 49 abstentions, the final text of the AI Act that was agreed upon during the trilogue negotiations in December 2023.
IMCO and LIBE approve the compromise text
The European Parliament's IMCO and LIBE Committees endorse the AI Act compromise text with 71 votes in favor, 8 votes against and 7 abstentions.
As expected, ambassadors of the EU member states unanimously approved the final compromise text of the AI Act. In the end, France backed down on the two remaining GPAI-related sticking points: the copyright transparency provision (which France considered too broad) and the threshold for GPAI models with systemic risk (which France considered too low).
Euractiv reports that during the first part of the final trilogue negotiations, the co-legislators found a provisional agreement on the rules for GPAI models that largely excludes open source models from the obligations:
According to a compromise document seen by Euractiv, the tiered approach was maintained with an automatic categorisation as ‘systemic’ for models that were trained with computing power above 10^25 floating point operations.

A new annexe will provide criteria for the AI Office to make qualitative designation decisions ex officio or based on a qualified alert from the scientific panel. Criteria include the number of business users and the model’s parameters, and can be updated based on technological developments.

Transparency obligations will apply to all models, including reporting on energy consumption and publishing a sufficiently detailed summary of the training data “without prejudice of trade secrets”. AI-generated content will have to be immediately recognisable.

Importantly, the AI Act will not apply to free and open source models whose parameters are made publicly available, except for what concerns implementing a policy to comply with copyright law, publishing the detailed summary, obligations for systemic models, and the responsibilities along the AI value chain.
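To give a sense of the scale of the 10^25 floating point operations threshold quoted above: it refers to the cumulative compute used to train a model. The following minimal Python sketch checks a hypothetical model against the threshold using the widely cited 6 · N · D rule of thumb for estimating dense transformer training compute (N parameters, D training tokens); the rule of thumb and the example figures are illustrative assumptions on our part, not part of the Act.

# Illustrative sketch: rough check against the AI Act's 10^25 FLOP threshold
# for GPAI models presumed to pose systemic risk. Uses the common
# C ~ 6 * N * D approximation for dense transformer training compute
# (N = parameters, D = training tokens); both the approximation and the
# example figures below are assumptions, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Estimate total training compute with the 6*N*D rule of thumb."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated compute reaches the 10^25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"~{flops:.1e} FLOPs, presumed systemic: {presumed_systemic(70e9, 2e12)}")

Under these assumptions the hypothetical model lands at roughly 8.4 × 10^23 FLOPs, more than an order of magnitude below the threshold.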
Council approves final Data Act
The Council of Ministers adopted the final text of the Data Act (DA) that was agreed upon during the trilogue negotiations.
According to an AWO Agency newsletter quoting from Contexte, during the trilogue meetings on 2 & 3 October the Commission proposed a way forward for foundation models that resembles the approach we had suggested in our July Policy Paper on Supporting Open Source and Open Science in the EU AI Act:
Moreover, when discussing foundation models and general purpose AI during the trilogue, the Commission verbally proposed working on "a two-tier compromise solution". The idea would be to apply a series of best practices (such as documentation, source verification, testing, etc.) to foundation models. And to consider more obligations - such as evaluation or third-party audits ("red-teaming") - for foundation models and general-purpose AI systems with significant impact, a source at the Commission tells us. "The threshold for making this distinction is not yet clear", but "size could be a criterion", says another source. This option will be presented to member states on October 6.
European Parliament report adopted in plenary
The plenary of the European Parliament adopts the joint IMCO-LIBE report on the AI Act at first reading, clearing the way for trilogue negotiations.
Joint IMCO-LIBE report adopted in committee
The IMCO and LIBE Committees of the European Parliament adopt their report containing 771 amendments. The report includes an exemption for open source AI components, but this exemption does not apply to so-called "foundation models". This means that under the Parliament's report, open source GPAI models would have to comply with the same obligations as all other GPAI models.
The AI Now Institute publishes a Policy Brief signed by a large number of AI researchers arguing that GPAI models carry serious risks and must not be exempt under the forthcoming EU AI Act. The policy brief does not contain any separate considerations related to open source AI and stresses that the AI Act should include meaningful requirements throughout the entire GPAI product life cycle:
GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
Open Future, Creative Commons, Wikimedia, Open Forum Europe, EleutherAI, Hugging Face, GitHub and LAION send a letter to the European Parliament's AI Act rapporteurs asking for clarification that the open source exemption currently under discussion applies to GPAI systems unless they are used commercially or in high-risk contexts:
As currently drafted, open source AI systems would be exempt unless they fall into one of the following categories: (1) all commercially deployed open source AI systems; (2) all open source AI systems placed into service that are high-risk, banned, or have transparency obligations under Title IV; and (3) all open source general purpose AI systems. While much of this scope is well-reasoned and reflects the risk-based framework upon which the AI Act was introduced, the final clause (3), “This exemption should not apply to fundamental general purpose AI models as described in Art 28b” should be struck.

[...] Risks associated with general purpose AI warrant careful regulatory scrutiny. The open source exemption was previously appropriately scoped to enable this scrutiny, with all commercially deployed and high risk systems facing relevant requirements, while enabling non-commercial development. Open source research, development, and deployment builds capacity for regulatory scrutiny—independent of the companies building and deploying these systems—and supports AI innovation in line with European values.
OpenAI announces the release of its ChatGPT chatbot, bringing mainstream attention to the new generation of generative AI applications. This supercharges the discussion about GPAI models, which had so far been relatively marginal in the overall discussions on the AI Act.
The Council adopts common position on the AI Act
The EU Member States adopt their common position on the AI Act. The provisions related to GPAI systems remain unchanged from the Czech compromise proposal from late September: obligations on the providers of general purpose AI systems would be defined through implementing acts drawn up by the European Commission.
The Mozilla Foundation publishes a policy brief on how the EU can take on “general-purpose AI” in the AI Act that argues for "accounting for the special nature of open source and ensuring that the AI Act contributes to building a vibrant open source AI ecosystem and enables important research into GPAI." To achieve this, the authors suggest that...
... the AI Act should not actively discourage the release of open source GPAI. Instead it should take a proportionate approach that considers both the special nature of the open source ecosystem as well as the fact that open source GPAI is released with more information than its proprietary equivalent, along with, for example, better means to validate provided information and test the capabilities of GPAI models. GPAI released open source and not as a commercial service should therefore be excluded from the scope of the approach outlined above if the information necessary for compliance is made available to downstream actors. This could contribute to fostering a vibrant open source AI ecosystem, more downstream innovation, and important safety and security research on GPAI.
The Future of Life Institute publishes an open letter on general purpose AI systems in the AI Act. In this letter, 10 civil society organizations (including EDRi, Access Now and Bits of Freedom) argue for ex-ante obligations in the AI Act on the providers of general purpose AI systems:
In this context, it is crucial that the responsibility to comply with the obligations of the AI Act be shared between the providers (developers) and the users (deployers) according to their level of control, resources and capabilities. There are only a handful of providers of GPAIS who are all very well-resourced with huge computational capabilities and who employ the world's best AI researchers. A single GPAIS can be used as the foundation for several hundred applied models (e.g. chatbots, ad generation, decision assistants, spambots, translation, etc.) and any failure present in the foundation will be present in the downstream uses.
In a non-paper regarding the September 2022 Revisions to the Draft EU Artificial Intelligence Act Proposed by the Czech Presidency, the US administration warns against placing risk-management obligations on the providers of GPAI systems:
Requiring all general purpose AI providers to comply with the risk-management obligations of the AI Act would be very burdensome, technically difficult and in some cases impossible. General purpose AI suppliers may have limited visibility on the subsequent use of their general purpose AI system, the context in which the system is deployed and other information necessary to ensure compliance with the iterative risk management obligations required for High-Risk AI systems under the EU AI Act.
The Centre for European Policy Studies publishes a paper by Alex Engler (the author of the Brookings Institution blog post referenced below) and Andrea Renda on reconciling the AI value chain with the AI Act. In this paper they argue that GPAI model providers should be regulated on the basis of "soft commitments" in the form of "a voluntary code of conduct for GPAI models" and that open-source AI models should be exempted from all AI Act requirements:
The AI Act should explicitly exempt the placing of an AI system online as free and open-source software (i.e. making the entire model object available for download under an open-source licence, not just available without cost via API access). The deployment and use of these AI systems for any covered non-personal purposes would still be regulated under the AI Act, thus maintaining the same level of consumer protection and safeguards for human rights and safety. However, this exemption would enable the collective development, improvement and public evaluation of AI systems, which has become a key outcome of open-source software. Including open-source AI systems under the AI Act requirements will likely result in a barrier to both scientific and research advancement, as well as reducing public understanding and rigorous scrutiny of commonly used methods and models.
In an open letter published by the Business Software Alliance, ten European software industry associations call on the EU to exclude General Purpose AI from the scope of the AI Act, describing plans to include it as a “fundamental departure from its original objective” and saying that it could stifle innovation and hit the open source community. On the last point they argue that including GPAI in the scope of the act would
Severely impact open-source development in Europe. The French Presidency’s proposal would require open-source developers of General Purpose AI and tools to comply with the AI Act at all phases of development, regardless of market placement and risk definition. In addition, the entities and individuals responsible for compliance would include all those involved in developing code that may eventually lead to a General Purpose AI or tool. This would severely impact and disincentivize the development of open-source software and AI in Europe.
The Czech presidency of the Council shares a new compromise proposal. This version maintains the inclusion of GPAI systems in the scope of the regulation, but instead of direct application of selected requirements, the Commission would now be obliged to adopt implementing acts specifying how these requirements should be applied. These implementing acts should be "based on a detailed impact assessment and taking into account in particular technical feasibility and market and technological developments."
The opinion by the European Parliament's legal affairs committee includes amendments that bring General Purpose AI systems into the scope of the regulation and impose a limited set of obligations on the original providers of GPAI systems. It also includes a specific carve-out for open source AI systems until their commercialization:
This regulation shall not apply to Open Source AI systems until those systems are put into service or made available on the market in return for payment, regardless of if that payment is for the AI system itself, the provision of the AI system as a service, or the provision of technical support for the AI system as a service.
The Brookings Institution publishes a blog post by Alex Engler arguing that by including GPAI in the scope of the AI Act, the EU "would take the unusual, and harmful, step of regulating open-source GPAI." Engler warns that...
In the end, the Council’s attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of GPAI. Open-source AI models deliver tremendous societal value by challenging the domination of GPAI by large technology companies and enabling public knowledge about the function of AI. The European Council’s former approach — exempting open-source AI until it is used for a high-risk application — would lead to far better outcomes for the future of AI.
Stability AI releases its Stable Diffusion text-to-image model. Both the code and the model weights have been released under the CreativeML Open RAIL-M license, and the model can be run on most recent consumer hardware.
The Czech presidency publishes its compromise proposal on the AI Act. The proposal maintains the French approach to GPAI systems.
Euractiv publishes an op-ed by Kris Shrishak and Risto Uuk (Future of Life Institute) on the need to include GPAI in the scope of the AI Act. They argue that...
… Developers of general purpose AI systems should be treated as providers in this legislation, while companies using these systems for specific applications should be treated as exactly that: users. The recent report from the two leading committees of the European Parliament also fails to clarify this. We recommend that the European Union explicitly assign responsibility to the developers of general purpose AI systems.
The French presidency of the Council shares a compromise text that proposes to delete the Article 52a language introduced by the Slovenian presidency and instead adds a set of new articles (4a-c) that bring GPAI systems "which may be used as high risk AI systems" into scope and impose a subset of the value chain obligations on them. From the recital accompanying the proposed new articles:
In particular, it is necessary to clarify that general purpose AI systems are AI systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and in a plurality of contexts. Therefore, due to their peculiar nature and in order to ensure a fair sharing of responsibilities along the AI value chain, such systems should be subject to proportionate and tailored requirements and obligations under this Regulation before their placing on the Union market or putting into service. Therefore, the providers of general purpose AI systems, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems, should cooperate, as appropriate, with final providers to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.
The Future of Life Institute publishes a position paper on GPAI arguing for their inclusion in the Regulation. The paper argues that...
… General purpose AI systems are software, which means they can very quickly be applied to a wide range of areas — much faster than the EU can adopt new acts. Therefore, the solution is to cover them in this regulation by default, and to ensure the responsibility for their safety is not just on EU companies, but shared with the creators of general purpose AI systems.
IMCO-LIBE draft report
The joint draft report of the IMCO and LIBE committees of the European Parliament is released. The report, authored by the co-rapporteurs Brando Benifei (IMCO) and Dragoș Tudorache (LIBE), does not contain any amendments dealing with General Purpose AI systems or models.
The UK-based Ada Lovelace Institute publishes an expert opinion on the AI Act authored by Lilian Edwards that — among other issues — criticizes the approach to GPAI models included in the Slovenian Presidency proposal:
We are pleased to note in the draft Council position (Article 52a) that it is clarified that any person (a deployer, in effect) who ‘puts into service or uses’ a general-purpose AI system for an intended high-risk purpose comes under duties to certify conformity with the essential requirements of Chapter III and does not seem to need a ‘substantial modification’. This is clearly aimed at catching the downstream adapter or deployer. But in so doing, the text seems to have done nothing to meet the problem that the user/deployer almost certainly lacks mandatory access to training-set or testing data, or ability to compel changes in the upstream service (unless these are built in as rights into a contract which is highly unlikely, especially when there are chains of providers as in example 1). At the same time, the Council proposal removes liability for the upstream provider of the general-purpose AI (Article 52a (1)). This exculpates the large tech suppliers like Amazon, Google and Microsoft, whose involvement in certification of AI as safe is, as discussed above, vital, since they have effective control over the technical infrastructure, training data and models, as well as the resources and power to modify and test them.
The Slovenian presidency shares a compromise text on the AI Act. A new article 52a clarifies that general-purpose AI systems should not fall in the scope of the proposal unless they are used in combination with high-risk applications. From the recital accompanying the proposed new article:
In particular, it is necessary to clarify that general purpose AI systems — understood as AI systems that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. — should not be considered as having an intended purpose within the meaning of this Regulation. Therefore the placing on the market, putting into service or use of a general purpose AI system, irrespective of whether it is licensed as open source software or otherwise, should not, as such, trigger any of the requirements or obligations of this Regulation.
Commission proposal for an Artificial Intelligence Act
The Commission publishes its proposal for a Regulation laying down harmonized rules on artificial intelligence and amending certain Union legislative acts: the AI Act.
OpenAI announces its commercial API, providing access to its GPT-3 language models. In the announcement blog post, OpenAI explains that:
Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task.
The Commission publishes a white paper on artificial intelligence, sketching out the European approach for the next five years. The white paper does not mention general purpose or generative AI models.
