The Centre for European Policy Studies publishes a paper by Alex Engler (the author of the Brookings Institution blog post referenced below) and Andrea Renda on reconciling the AI value chain with the AI Act. They argue that GPAI model providers should be regulated on the basis of “soft commitments” in the form of “a voluntary code of conduct for GPAI models”, and that open-source AI models should be exempted from all AI Act requirements:
The AI Act should explicitly exempt the placing of an AI system online as free and open-source software (i.e. making the entire model object available for download under an open-source licence, not just available without cost via API access). The deployment and use of these AI systems for any covered non-personal purposes would still be regulated under the AI Act, thus maintaining the same level of consumer protection and safeguards for human rights and safety. However, this exemption would enable the collective development, improvement and public evaluation of AI systems, which has become a key outcome of open-source software. Including open-source AI systems under the AI Act requirements will likely result in a barrier to both scientific and research advancement, as well as reducing public understanding and rigorous scrutiny of commonly used methods and models.