This observatory documents the discussion on how to deal with general purpose AI systems in the European Union’s Artificial Intelligence Act, with a particular focus on the implications for open source AI development. If you are looking for coverage of the impact of the AI Act on the protection of fundamental rights, please visit this page maintained by EDRi.
On 21 April 2021, the European Commission published its proposal for a Regulation laying down harmonised rules on artificial intelligence (also known as the Artificial Intelligence Act). The AI Act is one of the flagship digital legislative initiatives of the first von der Leyen Commission. The proposal was widely seen as the first attempt at a comprehensive regulatory approach to the challenges and opportunities posed by rapid technological development in this field.
Our Observatory documents the development of the proposal throughout the legislative process, which concluded in early 2024. It is limited to provisions related to general purpose AI systems, which began to emerge in late 2021 and became a focal point of discussion in 2022, alongside the public availability of a new generation of generative AI models.
According to a compromise document seen by Euractiv, the tiered approach was maintained with an automatic categorisation as ‘systemic’ for models that were trained with computing power above 10²⁵ floating point operations.
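To give a sense of scale, the following back-of-the-envelope sketch checks a hypothetical model against that 10²⁵ FLOP threshold. It uses the common approximation from the scaling-law literature that dense transformer training costs roughly 6 × parameters × training tokens; this heuristic, and the example model size, are our illustrative assumptions, not anything prescribed by the Act.

```python
# Back-of-the-envelope check against the compromise text's 10^25 FLOP
# threshold for automatic 'systemic' categorisation.
# Training compute is approximated with the common heuristic
#   total FLOPs ~ 6 * parameters * training_tokens
# from the scaling-law literature; the AI Act itself does not
# prescribe any estimation method.

SYSTEMIC_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print(f"Presumed systemic under the compromise text: {flops > SYSTEMIC_THRESHOLD_FLOPS}")
```

On these assumptions the example lands at roughly 8.4 × 10²³ FLOPs, about an order of magnitude below the threshold, which illustrates why the automatic tier was expected to capture only the very largest training runs.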
A new annex will provide criteria for the AI Office to make qualitative designation decisions ex officio or based on a qualified alert from the scientific panel. Criteria include the number of business users and the model’s parameters, and can be updated in light of technological developments.
Transparency obligations will apply to all models, including reporting on energy consumption and publishing a sufficiently detailed summary of the training data “without prejudice of trade secrets”. AI-generated content will have to be immediately recognisable.
Importantly, the AI Act will not apply to free and open source models whose parameters are made publicly available, except as regards the obligations to implement a policy to comply with copyright law, to publish the detailed training-data summary, the obligations for systemic models, and the responsibilities along the AI value chain.
Moreover, when discussing foundation models and general purpose AI during the trilogue, the Commission verbally proposed working on "a two-tier compromise solution". The idea would be to apply a series of best practices (such as documentation, source verification and testing) to foundation models, and to consider further obligations, such as evaluation or third-party audits ("red-teaming"), for foundation models and general-purpose AI systems with significant impact, a source at the Commission tells us. "The threshold for making this distinction is not yet clear", but "size could be a criterion", says another source. This option will be presented to member states on October 6.
GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
As currently drafted, open source AI systems would be exempt unless they fall into one of the following categories: (1) all commercially deployed open source AI systems; (2) all open source AI systems placed into service that are high-risk, banned, or have transparency obligations under Title IV; and (3) all open source general purpose AI systems. While much of this scope is well-reasoned and reflects the risk-based framework upon which the AI Act was introduced, the final clause (3), “This exemption should not apply to fundamental general purpose AI models as described in Art 28b”, should be struck.
[...] Risks associated with general purpose AI warrant careful regulatory scrutiny. The open source exemption was previously appropriately scoped to enable this scrutiny, with all commercially deployed and high risk systems facing relevant requirements, while enabling non-commercial development. Open source research, development, and deployment builds capacity for regulatory scrutiny—independent of the companies building and deploying these systems—and supports AI innovation in line with European values.
... the AI Act should not actively discourage the release of open source GPAI. Instead it should take a proportionate approach that considers both the special nature of the open source ecosystem as well as the fact that open source GPAI is released with more information than its proprietary equivalent, along with, for example, better means to validate provided information and test the capabilities of GPAI models. GPAI released open source and not as a commercial service should therefore be excluded from the scope of the approach outlined above if the information necessary for compliance is made available to downstream actors. This could contribute to fostering a vibrant open source AI ecosystem, more downstream innovation, and important safety and security research on GPAI.
In this context, it is crucial that the responsibility to comply with the obligations of the AI Act be shared between the providers (developers) and the users (deployers) according to their level of control, resources and capabilities. There are only a handful of providers of GPAIS who are all very well-resourced with huge computational capabilities and who employ the world's best AI researchers. A single GPAIS can be used as the foundation for several hundred applied models (e.g. chatbots, ad generation, decision assistants, spambots, translation, etc.) and any failure present in the foundation will be present in the downstream uses.
Requiring all general purpose AI providers to comply with the risk-management obligations of the AI Act would be very burdensome, technically difficult and in some cases impossible. General purpose AI suppliers may have limited visibility on the subsequent use of their general purpose AI system, the context in which the system is deployed and other information necessary to ensure compliance with the iterative risk management obligations required for High-Risk AI systems under the EU AI Act.
The AI Act should explicitly exempt the placing of an AI system online as free and open-source software (i.e. making the entire model object available for download under an open-source licence, not just available without cost via API access). The deployment and use of these AI systems for any covered non-personal purposes would still be regulated under the AI Act, thus maintaining the same level of consumer protection and safeguards for human rights and safety. However, this exemption would enable the collective development, improvement and public evaluation of AI systems, which has become a key outcome of open-source software. Including open-source AI systems under the AI Act requirements will likely result in a barrier to both scientific and research advancement, as well as reducing public understanding and rigorous scrutiny of commonly used methods and models.
Severely impact open-source development in Europe. The French Presidency’s proposal would require open-source developers of General Purpose AI and tools to comply with the AI Act at all phases of development, regardless of market placement and risk definition. In addition, the entities and individuals responsible for compliance would include all those involved in developing code that may eventually lead to a General Purpose AI or tool. This would severely impact and disincentivize the development of open-source software and AI in Europe.
This regulation shall not apply to Open Source AI systems until those systems are put into service or made available on the market in return for payment, regardless of whether that payment is for the AI system itself, the provision of the AI system as a service, or the provision of technical support for the AI system as a service.
In the end, the Council’s attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of GPAI. Open-source AI models deliver tremendous societal value by challenging the domination of GPAI by large technology companies and enabling public knowledge about the function of AI. The European Council’s former approach — exempting open-source AI until it is used for a high-risk application — would lead to far better outcomes for the future of AI.
… Developers of general purpose AI systems should be treated as providers in this legislation, while companies using these systems for specific applications should be treated as exactly that: users. The recent report from the two leading committees of the European Parliament also fails to clarify this. We recommend that the European Union explicitly assign responsibility to the developers of general purpose AI systems.
In particular, it is necessary to clarify that general purpose AI systems are AI systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, in a plurality of contexts. Therefore, due to their peculiar nature and in order to ensure a fair sharing of responsibilities along the AI value chain, such systems should be subject to proportionate and tailored requirements and obligations under this Regulation before their placing on the Union market or putting into service. Therefore, the providers of general purpose AI systems, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems, should cooperate, as appropriate, with final providers to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.
… General purpose AI systems are software, which means they can very quickly be applied to a wide range of areas — much faster than the EU can adopt new acts. Therefore, the solution is to cover them in this regulation by default, and to ensure the responsibility for their safety is not just on EU companies, but shared with the creators of general purpose AI systems.
We are pleased to note in the draft Council position (Article 52a) that it is clarified that any person (a deployer, in effect) who ‘puts into service or uses’ a general-purpose AI system for an intended high-risk purpose comes under duties to certify conformity with the essential requirements of Chapter III and does not seem to need a ‘substantial modification’. This is clearly aimed at catching the downstream adapter or deployer. But in so doing, the text seems to have done nothing to meet the problem that the user/deployer almost certainly lacks mandatory access to training-set or testing data, or the ability to compel changes in the upstream service (unless these are built in as rights into a contract, which is highly unlikely, especially when there are chains of providers as in example 1). At the same time, the Council proposal removes liability for the upstream provider of the general-purpose AI (Article 52a (1)). This exculpates the large tech suppliers like Amazon, Google and Microsoft, whose involvement in certification of AI as safe is, as discussed above, vital, since they have effective control over the technical infrastructure, training data and models, as well as the resources and power to modify and test them.
In particular, it is necessary to clarify that general purpose AI systems — understood as AI systems that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. — should not be considered as having an intended purpose within the meaning of this Regulation. Therefore, the placing on the market, putting into service or use of a general purpose AI system, irrespective of whether it is licensed as open source software or otherwise, should not, as such, trigger any of the requirements or obligations of this Regulation.
Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task.
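For readers unfamiliar with what such a “text in, text out” interface looks like in practice, here is a minimal sketch of a completions-style API call. The endpoint, model name and parameter names follow the legacy OpenAI completions API and are shown purely for illustration; check current provider documentation before relying on them. The point of the sketch is that one and the same call serves very different downstream tasks, which is precisely what makes such systems “general purpose”.

```python
# Minimal sketch of a "text in, text out" general-purpose interface:
# a single endpoint takes arbitrary text and returns a completion,
# regardless of the downstream task (translation, Q&A, summarisation, ...).
# Endpoint, model name and parameters follow the legacy OpenAI
# completions API and are illustrative only.
import os
import requests

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send text in, get text out, with no task-specific interface."""
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "davinci-002", "prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

# The same call handles unrelated tasks -- the "general purpose" point:
print(complete("Translate to French: Where is the train station?"))
print(complete("Q: What is the capital of Italy?\nA:"))
```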