Last week, Stanford HAI, Stanford CRFM, Princeton CITP, and RegLab released a Policy Brief on Considerations for Governing Open Foundation Models.
The brief outlines the current evidence on the risks of open foundation models (FMs) and offers recommendations for policymakers on how to think about those risks. The brief argues that open FMs – defined by the authors as “models with widely available weights” – “provide significant benefits by combating market concentration, catalyzing innovation, and improving transparency.” The authors therefore conclude that “policymakers should explicitly consider the potential unintended consequences of AI regulation on the vibrant innovation ecosystem around open foundation models.”
The policy brief also points out that, despite the widespread concern about the dangers of open foundation models that has dominated policy discussions, “the existing evidence on the marginal risk of open foundation models remains quite limited.” The key question for understanding their impact is the risk posed by open models relative to the risk posed by other models (the marginal risk):

To what extent do open foundation models increase risk relative to (a) closed foundation models or (b) pre-existing technologies such as search engines?
Coming a few days after the final compromise on the EU AI Act, the policy brief provides further support for the AI Act’s approach of granting targeted exemptions to open (source) AI developers. The final compromise on the AI Act sidesteps policies – liability for downstream harm, licensing of model developers – that the authors of the policy brief see as particularly problematic for open AI developers. As we argued in our analysis of the Act, the overall approach to open source AI development in the AI Act is quite sound, although there is still room for improvement in getting some of the details of the transparency obligations right.