Concerns Over the Impact of the AI Act on Open-Source R&D: LAION’s Open Letter

Opinion
May 2, 2023

Following the news that the European Parliament reached an agreement on its position in the negotiations of the AI Act, LAION (The Large Scale Artificial Intelligence Open Network) published an open letter calling on the Parliament to “consider the impact of the draft AI Act on open-source research and development.”

We have previously explained that neither the Commission’s initial proposal for the AI Act nor the Council’s general approach treated open-source AI systems as a distinct category deserving of different treatment from closed AI. As a result, policymakers have ignored concerns about the AI Act’s chilling effect on open-source AI development. The amendments subsequently introduced in the JURI opinion did mention open-source AI, but they failed to convincingly balance the competing objectives of limiting the chilling effect on open-source AI development and maintaining adequate regulatory oversight of high-risk uses of such systems.

At the moment, we are awaiting the official position of the European Parliament. Based on the available information, it will most likely exempt open-source AI components from the scope of the Act unless they are placed on the market or put into service by a provider as part of (1) a high-risk AI system or (2) an AI system that falls under Title II (prohibited AI practices) or Title IV (transparency obligations), which covers, among other things, systems that generate content. The exemption, however, will not apply to foundation models, understood as models trained on broad data at scale and designed for generality of output, which can be adapted to a wide range of specific tasks.

LAION is one of the key organizations promoting a public-interest-driven, open-source approach to developing AI systems. We have previously expressed our support for its petition urging the European Union and several other states to create a publicly funded and democratically governed research facility capable of building large-scale artificial intelligence models.

The signatories of the open letter are concerned about the impact that strict requirements for foundation models will have on the open-source development of AI. The letter makes three points:

  1. Open source is essential for safety, competition, and security in AI and is therefore worth protecting from overregulation.
  2. “One size fits all” rules will stifle open-source R&D: if all foundation models are de facto treated as high-risk, research and development of open-source foundation models in Europe will become difficult or impossible.
  3. Europe cannot afford to lose AI sovereignty, because “inhibiting open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure.”

In light of these concerns, the signatories formulate three recommendations. They urge the European Parliament to:

  1. Ensure that open-source R&D can reasonably comply with the AI Act. They propose that “where appropriate, the Act should exempt open-source models from regulations intended for closed-source models offered as a service.”
  2. Impose requirements proportional to risk. The signatories argue that not all foundation models are the same; not all are high-risk.
  3. Establish public research facilities to provide computing resources.

Without a doubt, regulators should use the inherent transparency advantage of open source to establish rules that ensure AI is developed ethically and responsibly. To that end, the development of open-source AI systems should be protected and encouraged through regulation. However, such laws must be carefully designed to prevent malicious actors from exploiting potential regulatory loopholes.

Establishing a regulatory framework that achieves the dual objectives of protecting open-source AI systems and mitigating risks of potential harm is thus a critical imperative for the European Union.

This is especially true because open-source, publicly supported AI systems are crucial digital public infrastructure that would help ensure Europe’s sovereignty.

Freedom of research is a fundamental right recognized in the EU Charter and must be protected. However, foundation models come with inherent risks that cannot be fully addressed at the application level; they must be tackled during the research and development phase. AI development should therefore proceed with appropriate safeguards and guardrails.

Open-source AI development methods have unique features that should be considered when designing these safeguards. These features include transparency, as the elements of the systems are openly available for inspection, and the participatory nature of the development process, which reduces the dominance of a few well-resourced AI providers.

As a result, standards that aim to balance the protection of open-source AI with the mitigation of potential harms should take these existing features as their starting point. While addressing potential risks, the regulatory framework should build on the inherent qualities of open-source AI research and development.

The specifics must be agreed upon in a dialogue between the research and development communities, regulators, and those impacted by the implementation and use of AI systems rather than behind closed doors.

The AI Act should take these considerations into account and enact rules that facilitate, rather than hinder, such a collaborative process.

Zuzanna Warso