Undermining the foundation of open source AI?

May 11, 2023 by: Paul Keller
Today, the European Parliament's IMCO and LIBE committees adopted their joint report on the proposed AI Act. The text includes additional safeguards for fundamental rights and an overall more cautious approach to AI. In this post, we provide an in-depth analysis of the implications of the text for open source AI development.

Friction and AI Governance: through the Scale of the City

May 10, 2023 by: Nadia Nadesan
AI governance captures a wide range of processes and conversations, from internal governance policies for companies to public, national, and transnational regulatory bodies. For this blog series, I intend to map places where friction in this seemingly effortless and inevitable flow of technology occurs.

How Wikipedia can shape the future of AI

May 4, 2023 by: Alek Tarkowski
The following piece is the first part of a case study on how Wikipedia is positioned to address the challenges of open AI development. It spells out the general argument, which will be followed by more specific suggestions on what a wikiAI mission could look like.

Response to European Commission’s call for evidence on “virtual worlds”

May 3, 2023 by: Zuzanna Warso et al.
Open Future submitted a response concerning the European Commission's call for evidence on "An EU initiative on virtual worlds: a head start towards the next technological transition". In the submission, we showed why policymakers should consider virtual worlds as Digital Public Spaces where the public interest takes precedence over corporate objectives.

Concerns Over the Impact of the AI Act on Open-Source R&D: LAION’s open letter

May 2, 2023 by: Zuzanna Warso
Establishing a regulatory framework that achieves the dual objectives of protecting open-source AI systems and mitigating risks of potential harm is a critical imperative for the European Union, especially since open-source, publicly supported AI systems are crucial digital public infrastructures that would ensure Europe's sovereignty.

Meet Alicja Peszkowska, the new Engagement Lead

April 26, 2023 by: Open Future
We are pleased to announce that we have welcomed Alicja Peszkowska to the team as our Engagement Lead. In this role, Alicja will oversee the strategic and operational engagement of Open Future with its primary stakeholders as well as manage our digital communication.

Small is beautiful, but can it scale?

April 20, 2023 by: Paul Keller
The Initiative for Digital Public Infrastructure published the Three-Legged Stool manifesto, outlining three pillars for building digital public infrastructure.

LAION petitions for a European public AI mission

April 13, 2023 by: Alek Tarkowski
The LAION proposal calls for a public research facility capable of building large-scale artificial intelligence models. It offers an alternative to corporate development of AI, in which responsible use is ensured in open source environments through the involvement of democratically elected institutions.

AI is already out there. We need commons-based governance, not a moratorium

March 30, 2023 by: Alek Tarkowski
The Future of Life Institute published an open letter asking for a moratorium on generative AI development. Yet social harms caused by AI will not be addressed in this way. Instead, commons-based governance of existing AI systems is needed.

To succeed, data commons advocates must address privacy concerns head-on

March 2, 2023 by: Zuzanna Warso
The EP's ITRE Committee has adopted its report on the Data Act. The report reflects a backlash against B2G data sharing: concerns about personal data protection combine with worries about the impact on business, competition, and innovation. This has prevented our proposal for a more robust B2G data sharing framework and a public data commons from gaining traction across the political spectrum.

Protecting Creatives or Impeding Progress?

February 17, 2023 by: Paul Keller
As generative machine learning (ML) becomes more widespread, the issue of copyright and ML input is back in focus. This post explores the EU legal framework governing the use of copyrighted works for training ML systems and the potential for collective action by artists and creators.