Code is speech, and speech is free

An argument in favor of open-sourcing AI
Opinion
October 12, 2023

Over the past few months, experts and practitioners have put forth different arguments demonstrating the value and benefits of open-source AI systems (OS AI). We recently published a position paper on the issue with five other organizations, prompted by the European Parliament’s report on the AI Act. We pointed out that principles of open source development, such as a commitment to transparency and documentation, align well with the values that drive efforts to mitigate the adverse effects of AI deployment. Recognizing this, we called for a tiered approach and proportionate requirements for foundation models tailored to fit particular risks.

While the process of drafting a precise definition of OS AI continues, calls to encourage open AI development have received pushback. A distinct set of arguments being raised relates to the potential for misuse of highly capable OS models. These arguments ultimately boil down to the claim that open-sourcing AI models increases the risk of malicious use by making models more accessible and modifiable.

In a recent paper from the Centre for the Governance of AI, Elizabeth Seger and others argue that for some highly capable foundation models that they believe are likely to be developed in the near future, open-sourcing may entail extreme risks that outweigh the benefits. According to the authors, these foundation models should not be open-sourced. Based on this, they offer a set of recommendations, including that governments should enact safety measures through means such as “liability law and regulation, licensing requirements, fines, or penalties.”

As calls for restrictions on the distribution pathways of AI models gain traction, including calls for regulatory intervention such as the one above, legislators and the other stakeholders involved in these discussions must have a clear understanding of all the values at stake.

AI models encode patterns and knowledge from the data they were trained on. In that sense, they encapsulate and communicate ideas. Limiting the possibility of sharing these ideas interferes with the freedoms of researchers and developers. While these liberties are not absolute, they provide a significant counterargument to calls for legal restrictions on openly distributing models.

The Freedom of Expression and Open-Sourcing AI Models

AI models consist of various components, including the model architecture, model weights, inference code, and training code. Open-sourcing AI models refers to the practice of making these components freely accessible to the public, allowing anyone to view, use, modify, and distribute them. By freely sharing the components, different actors can collaborate on improving them (for example, through fine-tuning) and on building a better understanding of their limitations.
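To make this concrete, here is a minimal sketch (not taken from the position paper) of what that access looks like in practice, assuming the openly released GPT-2 checkpoint on the Hugging Face Hub and the transformers library; any other open checkpoint and tooling would illustrate the same point.

```python
# Minimal sketch: what "view, use, modify" means for an openly released model.
# Assumes the `transformers` library and the openly published "gpt2" checkpoint;
# this is an illustration only, not a reference to any model discussed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # an openly licensed checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# View: the weights are ordinary tensors anyone can inspect.
for name, tensor in list(model.named_parameters())[:3]:
    print(name, tuple(tensor.shape))

# Use: run inference locally, with no intermediary controlling access.
inputs = tokenizer("Open models can be studied because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Modify: from here, anyone can fine-tune the weights on their own data,
# adapt the inference code, and redistribute the result under the model's license.
```

The point is not the particular library, but that every step happens in code and data that others can read, audit, and change.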

We have previously explained that broad access to functional AI systems and open, general-purpose models is necessary and valuable for open and accountable development. Apart from these arguments in favor of open-source AI systems, it is also important to remember that computer code (which forms an integral part of AI models) is a form of speech in and of itself. Recognizing this is critical in light of calls for regulators to step in and impose restrictions on how researchers and developers distribute AI models.

The classification of computer code as a form of expression confers on it a distinct status that protects it under the provisions governing free expression. In the United States, computer code is protected by the First Amendment to the United States Constitution. In Europe, the freedom to disseminate research results is safeguarded under Article 10 of the European Convention on Human Rights, which upholds the fundamental right to free expression. These legal safeguards validate code’s expressive capacity and emphasize the importance of preserving individuals’ abilities to communicate, innovate, and share their ideas via this digital medium.

With this in mind, there’s a compelling argument that we should consider the different components of an AI system as information and, as such, safeguard them under laws that protect the freedom of expression. This perspective aligns with the fact that courts have recognized computer code as a form of speech deserving of legal protection.

Hence, the calls to limit the possibility of open-sourcing AI models must consider that such restrictions would interfere with freedom of expression (or freedom of speech in the US).

There are legitimate situations in which limitations can be imposed. However, these constraints must adhere to the principle of proportionality. In other words, any restriction on free expression must be proportionate to the intended goal in order to be considered lawful.

Metaphors Matter

The discussion about proportionality boils down to whether the benefits of open-sourcing AI outweigh the potential risks. There is some evidence that sharing AI models openly carries risks. For example, the authors of the paper quoted above point out that Stable Diffusion’s safety filter can be removed by deleting a single line of inference code. That would not be possible if the Stable Diffusion model were not open source. But for the most part, the misuse of generative or general-purpose AI does not depend on whether the tool is open source. People have generated content they should not have been able to produce using closed models, e.g., OpenAI’s DALL-E.

Concerning “existential risks,” the fears surrounding AI are fueled primarily by speculation and, thus, at the moment, exist in the realm of the imaginary. In addition, because of its novelty, the field of AI is heavily shaped by metaphorical language.

In all spheres of life, metaphors serve as cognitive bridges. They allow people to understand unfamiliar concepts by drawing parallels with more familiar ones. As a result, metaphors can significantly shape how society perceives, regulates, and interacts with any given subject.

Metaphors also shape perceptions and understanding of the field of AI. They can clarify the subject and make it more accessible, or they can obscure its true nature. A poorly chosen or misguided metaphor can have far-reaching consequences for the entire research community.

Consider the analogy between AI and nuclear weapons, or between open-sourcing an AI model and disclosing sensitive information about biological weapons. These metaphors frighten people, conjuring images of dangerous technologies running wild without oversight. However, when used as arguments against open-sourcing AI and in favor of laws that would inhibit some forms of dissemination, these comparisons miss the point of OS AI practices.

If we have to use metaphors, a better analogy would be publishing a book with instructions on setting up a laboratory for life sciences research. Yes, people with the necessary expertise could explore dangerous substances after they build a laboratory using these instructions, just as malicious actors could modify some OS AI models for unintended or harmful purposes. However, the mere existence of such possibilities should not justify banning or restricting access to the book.

What about precaution?

While the temptation to limit access to open-source AI models due to fear of potential outcomes is, to some extent, understandable, such an approach has far-reaching detrimental consequences. It shifts the focus of policymakers away from the actual harms of biased and flawed AI systems and hinders legitimate research efforts.

Precaution in regulating and implementing new technologies is undeniably prudent, especially when state authorities deploy technologies with far-reaching implications for people, for example, when law enforcement employs so-called predictive policing methods despite the lack of evidence on their effectiveness and accuracy.

However, precaution should not be used as a blanket justification for arbitrary decisions that interfere with rights and freedoms. In the realm of regulating freedom of expression (and regulating how people can disseminate AI models falls within this realm), maintaining proportionality necessitates that legislators ground their decisions in concrete evidence and a comprehensive grasp of potential risks instead of relying on speculation.

Knowledge is often a double-edged sword. However, the connection between open-sourcing AI systems and potential harm hinges on the substantial intent and effort of those with malicious or reckless goals. The risk of harm arises when individuals or entities, motivated by malice or negligence, modify the models and then use them for unethical or dangerous purposes. To avoid misunderstandings, this is not to say that technology is neutral. Regulating open-source AI requires a balanced approach that recognizes the values ingrained in this form of production and is not based on false analogies. Against this backdrop, enacting precautionary legislation that would restrict making OS AI components available for others to study and modify would be disproportionate. It is one thing to advocate for responsible research practices and quite another to call for governments to intervene in research practice. These two types of regulation should not be conflated.

Rather than imposing restrictions on disseminating OS AI models, legislators should focus on making societies more resilient, limiting the spread of harmful content, and, most importantly, preventing actual misuse of technologies and addressing the harms we are already witnessing. This should result in the adoption of proportionate requirements that apply when foundation models are deployed and have an impact on people. These requirements should take into account and differentiate between various applications and development methods, including open-source approaches.

Zuzanna Warso