Blog

Falcon 180B, open source AI and control over compute

October 25, 2023 by: Alek Tarkowski
This opinion takes a closer look at how the Falcon 180B model is licensed, as part of our exploration of emerging standards for sharing AI models.

Code is speech, and speech is free

October 12, 2023 by: Zuzanna Warso
Some experts believe that open-sourcing AI increases the risk of malicious use. In this opinion, we argue that calls for regulators to intervene and limit the possibility of open-sourcing AI models must consider the impact on freedom of expression.

Open Source, AI and the Paradox of Open

September 15, 2023 by: Zuzanna Warso et al.
We agree with Widder, West, and Whittaker that openness alone will not democratize AI. However, it is clear to us that any alternative to current Big Tech-driven AI must be, among other things, open.

We need frameworks that balance sharing and consent

September 7, 2023 by: Alek Tarkowski
There is a growing need for a new set of community-based principles and a governance framework for the digital commons, one that combines the achievements of free culture with care for other rights and balances sharing with consent.

Friction and AI Governance: Experience from the Ground

August 23, 2023 by: Nadia Nadesan
In this article, Open Future fellow Nadia Nadesan shares lessons from facilitating a citizens' assembly with Algorights to investigate local participation in the context of the AI Act.

The Mirage of Open-Source AI: Analyzing Meta’s Llama 2 Release Strategy

August 11, 2023 by: Alek Tarkowski
In this analysis, I review the Llama 2 release strategy and show its non-compliance with the open-source standard. Furthermore, I explain how this case demonstrates the need for more robust governance that mandates training data transparency.

Supporting Open Source and Open Science in the EU AI Act

July 26, 2023 by: Paul Keller
Today — together with Hugging Face, Eleuther.ai, LAION, GitHub, and Creative Commons, we publish a statement on Supporting Open Source and Open Science in the EU AI Act. We strongly believe that open source and open science are the building blocks of trustworthy AI and should be promoted in the EU.

Stewarding the sum of all knowledge in the age of AI

July 7, 2023 by: Alek Tarkowski
We need a more holistic approach that considers how machine learning technologies impact Wikimedia: changes to editing, disintermediation of users, and governance of free knowledge as a resource used in AI training. These changes call for an overall strategy that balances the need to protect the organization from negative impacts and harms with the need to deploy new technologies productively to help build the digital commons.

The launch of Threads is an opportunity for public institutions to embrace the fediverse

July 6, 2023 by: Paul Keller
Meta's entry into the space, and its decision to be interoperable with the existing fediverse, could be a good thing: it paves the way for public institutions to join the fediverse and reduce their dependence on private communication platforms.

Friction and AI Governance: Institutional Intermediaries

July 3, 2023 by: Nadia Nadesan
This article examines an example from the global women's rights movement of how organizations and institutions support local actors to participate in transnational AI governance and challenge top-down structures and mechanisms.

AI, the Commons, and the limits of copyright

June 22, 2023 by: Paul Keller
There has been a lot of attention on copyright and generative AI/ML over the last few months. In this essay, I propose a two-fold strategy to tackle this situation. First, it is essential to guarantee that individual creators can opt out of having their works used in AI training. Second, we should implement a levy that redirects a portion of the surplus from training AI on humanity's collective creativity back to the commons.