Purpose, Not Power: Rethinking Europe’s Apply AI Strategy

Opinion
October 20, 2025

The European Commission’s Apply AI Strategy, published on October 8, 2025, marks the next phase of Europe’s AI policy ambitions. Building on the AI Continent Action Plan announced six months earlier, it focuses on the deployment of AI across strategic sectors through eleven sectoral flagship programs. Three supportive measures concern AI adoption by SMEs, AI literacy, and a Frontier AI initiative focused on providing advanced AI capabilities. The strategy assumes that AI has transformative potential that will enhance the competitiveness of European industries and unlock broader societal benefits.

In a previous analysis of the AI Continent Action Plan—of which the Apply AI strategy forms a part—I wrote that the Action Plan can be considered a holistic approach to strengthening the EU’s capacity to build and deploy AI systems, including for public interest goals. Yet if the ambition is for AI systems to function as public digital infrastructure, a stronger vision of purposeful AI deployment is needed. The same can be said for the new Apply AI strategy.

In our submission to the Apply AI consultation, Open Future proposed three principles for applying AI in Europe: support and use of public AI infrastructure, purposeful AI deployment, and ensuring its sustainability. The announced Strategy represents a step forward in addressing some of the concerns that we raised. It makes progress on policies that establish European public AI infrastructure, and thus reduces dependencies on concentrated AI power. Where it falls short is in ensuring purposeful deployment, and it remains silent on sustainability concerns. As such, the strategy could lead to Europe joining the global “AI race” on terms still set by dominant commercial players.

Frontier AI Initiative as a public AI strategy

Building public AI capacity means reducing dependencies on dominant commercial players by establishing independent means to deploy AI systems. Europe’s AI strategy needs to be based on its own, viable AI infrastructure. Only in this way can solutions be built and deployed that are not driven by a commercial logic. Otherwise, applying AI in Europe will mean acting as a sales force for the largest AI companies.

The Frontier AI initiative—one of the strategy’s supportive measures—represents significant progress on this principle.

“For the EU, it is a priority to ensure that European models with cutting-edge capabilities reinforce sovereignty and competitiveness in a trustworthy and human centric manner.”

A commitment to build open frontier models is in line with recommendations that we made in our white paper on public AI, published with Bertelsmann Stiftung as part of their reframe[Tech] project. By connecting this initiative with the already deployed AI Factories and the planned Gigafactories, the strategy addresses a key concern: that Europe has been funding compute capacity without being clear about its purpose.

Building open frontier models is one of the best uses that can be made of public compute. However, the strategy could be far bolder in establishing open source development as a core principle. America’s AI Action Plan, announced by the United States in August, has an agenda item titled “Encourage Open-Source and Open-Weight AI”, aimed at creating a supportive environment for both open research and commercial development.

The European strategy is much less clear. It lacks both a bolder commitment to open-sourcing AI and a reference to the AI Act’s definition of open-source AI. Open-source development is mentioned in some of the eleven flagships, including the public sector one. But it is missing from others, suggesting a potential piecemeal approach. The Frontier AI initiative should be clear about supporting open-source—or at least open-weight—AI development.

The Strategy encourages “integrating AI building on European solutions,” but it stops short of a clear commitment to deploy open-source AI. Given today’s dearth of competitive European AI stacks, there’s a real risk that—even if Europe funds new frontier models—public bodies and firms will still adopt turnkey systems from dominant providers. Because the Frontier AI initiative will take time to deliver usable public models, the interim will favor commercial vendors, deepening dependency unless the EU explicitly prioritizes open-source deployment (through procurement, infrastructure, and portability requirements).

Finally, the strategy should be clearer about the advantages of open source solutions, which go beyond the oft-mentioned “competitiveness and sovereignty”. These include greater transparency and security, increased innovation typical of open research, opportunities for multilateral cooperation with like-minded nations, and safeguarding European values and cultural diversity.

Public AI comes in different shapes and sizes

The strategy correctly notes that AI systems may take various forms—but stops short of outlining an AI development program that builds not just frontier AI, but an open ecosystem of other models and solutions, including research and development of both small and specialized models. In this context, it is worrying that the Strategy even mentions Artificial General Intelligence (AGI). European frontier model development should instead commit to a “normal AI” policy framework, which holds that AI development will be slow and its economic impact gradual—while still paying attention to the systemic risks of AI systems.

The strategy acknowledges challenges with access to useful, high-quality data, which will be crucial for AI applications. These are to be addressed by the Data Union Strategy, which will be made public in mid-November. Nevertheless, missing from the Apply AI strategy is an acknowledgement that there are structural challenges to the sustainability of information production across the various sectors. These will become even more salient once AI solutions are deployed.

The strategy also lacks clarity on how the frontier models will be deployed and governed. The connection made to the AI in Science strategy is correct—public AI should indeed support research and innovation. But the sectoral structure of the strategy leaves open questions about governance mechanisms, access conditions, and how different communities will be able to use this public infrastructure. For public AI infrastructure to truly serve the public interest, it must be governed transparently and democratically, not simply as a resource managed by technical experts.

What’s the purpose of European AI?

In our submission, we also called for purposeful AI deployment—ensuring that AI is deployed based on clear evidence of its benefits and alignment with public interest goals. We argued that to avoid technosolutionism, applied AI solutions need to address real societal needs and serve communities across Europe.

This is where the Apply AI strategy is at its weakest, as it rests on an “AI first” principle. Like many key elements of the strategy, the principle is formulated vaguely, as encouraging European businesses and organizations to “integrate AI building on European solutions”. It suggests that European policymakers treat AI technologies as having an inherent capacity to positively transform various areas of life. This risks the pursuit of “AI without purpose”: technological solutions in search of problems to solve. And while the strategy refers to the AI Act and its framework of “human-centric and trustworthy AI”, simultaneous efforts to delay the implementation of the Act create a risk that these principles will not be brought to life.

The sectoral flagship programs read more like aspirational visions than evidence-based policy interventions, with proposals for building various AI toolboxes and platforms. There are lessons to be learned here from the data spaces initiatives, which after five years have not delivered on their promise—yet they are referenced as a building block for this strategy.

A successful AI strategy hinges on strong governance capable of creating precise and purposeful deployment roadmaps. Instead of a simplistic—and possibly dangerous—“AI first” rule, purposeful deployment demands the willingness to identify cases where AI is not appropriate, focusing deployment efforts only where they provide clear value. Contrary to the proposed “AI first” rule, Europe should look beyond the AI development agenda and consider other tools for digital transformation. Some policy goals can be achieved with much smaller investments in public digital infrastructure—alternative social networking services, for example.

The Strategy is also silent on the issue of sustainability, a key aspect of purposeful deployment. Detailed recommendations on this matter were proposed in the joint statement “Within Bounds: Limiting AI’s environmental impact”. Given the enormous energy consumption and environmental impact of training and running large AI models, this omission is troubling—especially in a strategy that commits to an investment of €1 billion. Structural drivers behind unchecked digital expansion need to be acknowledged and addressed. In our recent report with ECOS, “How data center expansion risks derailing climate goals and what to do about it”, we offer recommendations for embedding the principles of sufficiency, circularity, and transparency.

Sovereignty without joining the AI race

If the Commission’s plans materialize, Europe has a reasonable chance of creating its own models as public AI infrastructure by orchestrating research efforts and making effective use of investments in public compute.

However, the strategy’s lack of clarity on purposeful deployment and sustainability means this infrastructure still risks becoming a mere tool in an AI scaling race, in which the dominant commercial players call the shots. This development path is inherently costly, unsustainable, and risks cementing the very dependencies Europe seeks to escape.

Benjamin Bratton recently wrote about Europe’s chances of building its own AI systems, and described open source as “intrinsically anti-sovereign technologies” that work “for anyone, anywhere, for any purpose”. By thinking of AI as a digital public good, Europe can steer away from a “sovereignty trap” that is increasingly visible in AI policy debates.

The open-source development model offers a form of sovereignty based on partnerships and mutual capability, instead of chauvinism and asymmetrical power. The success of the European path to AI should depend not on the scale of its compute power, but on the purpose and public nature of its AI infrastructure. Instead of opting for an “AI first” approach, fueled by ever-increasing compute investments, Europe must constantly ask itself: What public value do we need to create? And only then, what digital tools—AI or otherwise—will best achieve it?

Alek Tarkowski