In February 2025, at the AI Summit in Paris, Ursula von der Leyen articulated a shift in Europe’s approach to AI: the Commission President positioned AI no longer primarily as a regulatory or ethical challenge, but as an economic and geopolitical opportunity. Central to the vision presented in Paris was a significant increase in public investment in AI and an infrastructural build-out captured by the idea of “AI Gigafactories”.
The industrial ambition was underscored by a headline figure: a commitment to mobilize €200 billion for AI investment in Europe. The amount combined a €150 billion pledge from the European AI Champions Initiative with a €50 billion contribution through the Commission’s InvestAI initiative. It was described as the largest public–private partnership in the world for the development of trustworthy AI. In a subsequent press release, the Commission specified that €20 billion would be allocated to fund the first four to five AI Gigafactories.
As we approach this year’s AI summit in Delhi in mid-February, it is worth looking back at the declarations and pledges made in Paris and examining how they have since unfolded for the EU. The question of whether large-scale, EU co-funded AI infrastructure investment is a suitable response to the challenges the EU faces in relation to AI technologies is a fundamental one and will be explored in our Steering AI Investment line of work. This analysis takes a different, but related, starting point. In what follows, I examine how the prioritisation of large-scale compute infrastructure at the core of EU AI investment policy is already being set in motion. I trace how this objective is operationalized through the financial and legal frameworks underpinning the EU’s contribution to the Gigafactories, the role of “technology infrastructure suppliers,” the risks of dependency and lock-in across the Gigafactories’ AI hardware and software layers, and the challenge of mobilising demand.
What emerges is a duality in the current approach. On one hand, there is an emphasis on large-scale, centralized investment in AI infrastructure that privileges certain infrastructure providers and development pathways, particularly compute-intensive model training and deployment. On the other hand, the EU-level initiatives aimed at driving AI adoption are far more dispersed. What appears to be missing is coordination between these two strands: an approach that aligns the supply of infrastructure with downstream demand.
The central question is whether, given the political economy shaping AI developments, the industrial policy that the EU currently pursues can help deliver the strategic autonomy and “distinctly European brand of AI” promised in Paris: one focused on applying AI to complex, industry-specific use cases, grounded in Europe’s industrial and manufacturing strengths, cooperative in nature, and embracing open source. Or whether it is becoming an expensive contribution to a trajectory of AI development shaped primarily by a handful of dominant AI companies.
The stakes of these choices are heightened by the material and environmental consequences of large-scale AI infrastructure investment. AI Gigafactories involve long-term commitments around energy consumption, water use, land, and grid capacity, locking in particular technological pathways for years to come. Decisions taken now about where, how, and by whom this infrastructure is built carry implications that extend well beyond industrial competitiveness. They shape the economic, environmental, and social conditions of the regions and communities in which these facilities are located, influencing local resource allocation, infrastructure planning, and public services. Taken together, these effects amplify the political, economic, and governance significance of coordination and prioritisation in EU AI industrial policy.
The AI Continent Action Plan, published two months after the Paris Summit, sheds more light on the idea of AI Gigafactories. Positioned as the next evolutionary step beyond the existing AI Factories – upgrades to the EuroHPC supercomputing centres primarily aimed at supporting research – Gigafactories will be large-scale facilities designed to develop and train complex frontier AI models. They will be federated with the existing EuroHPC network of AI Factories to ensure integration and knowledge sharing across the European AI ecosystem.
Each Gigafactory will bring together computing power exceeding 100,000 advanced AI processors, putting them roughly on par with the facilities currently operated by leading AI companies such as Google, Microsoft (and its partnership with OpenAI), Meta, Amazon Web Services, and xAI. The build-out of these Gigafactories is situated within a global geoeconomic context shaped by competition over AI capabilities. The rationale advanced for their necessity emphasises that they are essential for Europe to “compete globally and maintain its strategic autonomy in scientific progress and critical industrial sectors,” thereby reinforcing an “AI race” framing of AI infrastructure investment. The Action Plan acknowledged, but did not address, the environmental implications of such massive computational infrastructure, including energy and water consumption.
On the same day the AI Continent Action Plan was published, the Commission, together with the EuroHPC Joint Undertaking – the public-private partnership created to coordinate the EU’s supercomputing, AI, and quantum infrastructure – launched an informal call for expressions of interest for the establishment of AI Gigafactories. According to a memorandum of understanding recently signed between the Commission, the European Investment Bank, and the European Investment Fund, a formal call for proposals is expected to follow. While the Commission points to strong initial interest from industry, business assessments of the AI Gigafactories initiative remain mixed.
The legal basis for the EU’s contribution to the construction of AI Gigafactories was provided in January 2026, when Council Regulation 2026/150 entered into force, amending the EuroHPC Joint Undertaking Regulation. It channels up to €4.12 billion from Horizon Europe, Digital Europe, and the Connecting Europe Facility-Digital to the EuroHPC Joint Undertaking. The progression from the €50 billion headline figure announced in Paris, to the €20 billion earmarked for AI Gigafactories, and finally to the €4.12 billion set out in the Regulation reflects the distance between political ambition, a more focused commitment to Gigafactories as a priority, and the Union’s currently feasible direct contribution under existing EU budgetary constraints.
The Regulation sets out a specific financial structure: the Union’s contribution is capped at 17 per cent of the capital expenditure (CAPEX) for the overall computing infrastructure of an AI Gigafactory, with JU participating states required to at least match this amount. Private partners from the consortium that will establish and operate the AI Gigafactory cover the remaining CAPEX as well as all operational expenditure (OPEX). In return for its financial contribution, the Union receives a proportional ownership stake in the Gigafactory’s computing infrastructure, maintained for at least five years from the start of operations. Access to the facility is allocated in proportion to financial contributions, with the Regulation specifying priority for certain users: entities governed by public law, industrial users working on EU-funded research projects, and private innovation activities of SMEs and scale-ups. Taken together, this legal and financial structure reflects an effort to balance infrastructure ambitions with budgetary restraint and risk-sharing, while simultaneously seeking to mobilise private investment and preserve guaranteed access for public-interest-oriented uses.
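The cost-sharing rule described above can be sketched as a simple calculation. This is an illustration only, using hypothetical figures: the Regulation caps the Union’s share at 17 per cent of computing-infrastructure CAPEX, requires participating states to at least match it, leaves the remaining CAPEX (and all OPEX) to private partners, and allocates access in proportion to financial contributions.

```python
# Illustrative sketch of the Gigafactory cost-sharing rule, with hypothetical
# figures. The Union contributes up to 17% of computing-infrastructure CAPEX,
# participating states at least match that amount, and private partners cover
# the remainder of CAPEX (plus all OPEX, not modelled here).

def gigafactory_shares(capex_bn: float, union_cap: float = 0.17) -> dict:
    union = capex_bn * union_cap        # EU contribution at the 17% cap
    states = union                      # minimum matching by participating states
    private = capex_bn - union - states # remaining CAPEX borne by private partners
    # Access to the facility is allocated in proportion to contributions.
    return {
        "union_bn": round(union, 3),
        "states_bn": round(states, 3),
        "private_bn": round(private, 3),
        "union_access_share": round(union / capex_bn, 3),
    }

# Example: a hypothetical facility with €3 billion of computing-infrastructure CAPEX.
print(gigafactory_shares(3.0))
```

On these assumed numbers, the Union and the participating states would each contribute €0.51 billion, private partners €1.98 billion, and the Union’s guaranteed access share would match its 17 per cent stake.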
Suppliers of technology infrastructure play a central role in the AI Gigafactory model. Technology infrastructure comprises the hardware and software systems needed to actually operate a Gigafactory, including advanced AI processors (chips), high-capacity (AI-optimized) storage and networking infrastructure, and specialized software for training and deploying AI models.
In practice, this role is likely to be filled by a small number of non-European large technology companies, as they are among the few actors with the capacity to deliver state-of-the-art AI processors and the accompanying infrastructure at scale required for AI Gigafactories. This includes companies such as Nvidia and Google, which combine advanced AI processors with tightly integrated system software and infrastructure capabilities, as well as AWS and Microsoft, which provide large-scale cloud infrastructure and workload orchestration capacity. This concentration of critical compute capabilities and system-level expertise risks further entrenching dependencies and lock-ins across AI hardware and software, shaping technological pathways in ways that are difficult to reverse.
The table below maps the dependencies across the AI hardware and software layers, showing where the risk of dependence on “technology infrastructure providers” is greatest:
| Layer | Components and role in managing AI workloads | Dependency risk |
| --- | --- | --- |
| Hardware Layer (Chips/Compute Hardware) | GPUs, TPUs; provide raw computation for training and running models | High (externally controlled, concentrated supply chain). Despite the presence of a European supplier of upstream manufacturing equipment (ASML), Gigafactories remain fully reliant on non-EU advanced AI processors, with critical chokepoints in US-controlled design ecosystems and Asian manufacturing. |
| System Software Layer | OS kernels, drivers, runtimes, compilers, and libraries; enable, optimise, and coordinate AI workloads by translating models into hardware-specific instructions | High/medium (ecosystem lock-in). Open-source foundations coexist with proprietary, hardware-specific drivers, runtimes, compilers, and libraries (e.g. Nvidia’s CUDA ecosystem), creating de facto lock-in to hardware-centred software ecosystems. |
| Cloud and Orchestration Layer | Virtualization, containerization, and orchestration tools (e.g. Docker, Kubernetes), and distributed computing frameworks; coordinate the execution, scaling, scheduling, and networking of AI workloads across multiple machines and locations, enabling large-scale training and inference | High/medium (operational concentration). Open-source orchestration and container tooling reduces licensing lock-in, but operating AI workloads at Gigafactory scale remains dependent on US hyperscalers due to their vertically integrated services and operational expertise for large-scale training and inference. |
| Data Layer | Datasets (raw and labeled), storage systems, and preprocessing tools; supply the input that AI models learn from | Medium (institutional and governance constraints). Fragmentation across datasets, uneven legal clarity around reuse, and the absence of large-scale storage and data stewardship infrastructures limit the effective mobilisation of data. |
| Model Layer | AI models (pre-trained and fine-tuned) and development frameworks (e.g. PyTorch, TensorFlow); define model architectures, training procedures, and inference behaviour | High/medium (ecosystem and agenda dependence). Frameworks such as PyTorch and TensorFlow are open source, but EU developers remain dependent on dominant model ecosystems that define architectures, training paradigms, benchmarks, and release cycles, effectively locking in prevailing conceptions of what AI is and how it should be developed. |
The Regulation recognises that Europe currently lacks the capacity to supply key components of large-scale AI infrastructure and therefore focuses on managing, rather than eliminating, dependencies in the short term. Its core approach is to use governance, ownership, and access rules to reduce the risks associated with vendor lock-in, while allowing Gigafactories to be built using existing state-of-the-art technologies and global supply chains.
At the centre of this approach is the Hosting Agreement between the EuroHPC JU and the AI Gigafactory Coordinator. This contract is designed to protect the Union’s strategic interests by setting conditions on how the infrastructure is owned, operated, and accessed. Where technology infrastructure suppliers are part of a Gigafactory consortium, the Agreement must include safeguards to prevent conflicts of interest and to limit undue influence over the facility’s operation. Where a consortium does not already include a supplier, the Regulation requires that technology providers be selected through open and transparent procurement procedures. These procedures must explicitly take into account supply chain security, resilience, and Union added value. Suppliers may be restricted or excluded altogether if they are assessed as posing risks to the Union’s strategic interests, autonomy, or security. The Regulation also allows the Joint Undertaking to establish framework contracts for essential and high-demand components, including advanced AI processors, reflecting the limited number of suppliers capable of delivering such technology at scale.
In addition, the Regulation places limits on who may exercise strategic control, as technology suppliers cannot lead Gigafactory consortia. The coordinating entity must be headquartered in the Union and remain under the control of EU-based actors. This is meant to ensure that, even where critical hardware and software are sourced externally, decision-making authority remains European.
Fundamentally, within the current, compute-intensive paradigm of AI development reflected in the creation of the Gigafactories, the EU’s approach to its lack of strategic autonomy across hardware and software is to contract and govern around existing structural technological dependencies, with the expectation that greater autonomy can be progressively built over time through local chip design, open software environments, secure EU infrastructure, federated data access, and the development of frontier AI models. This assumption may prove difficult to realise given the depth and persistence of those dependencies.
The viability of large-scale AI infrastructure will ultimately depend on the ecosystem that uses it. In their recent analysis, Julia Christina Hess and Felix Sieker argue that AI Gigafactories can achieve economic viability through two different models. The first is an anchor-customer model, in which one or a small number of advanced AI labs, such as OpenAI or Google DeepMind, generate sustained, high-intensity compute demand. The second is a multi-client model, in which Gigafactories serve a broader set of users with low to moderate AI workloads across different applications.
At present, Europe lacks a sufficient number of AI labs operating at the scale required to function as anchor customers in the first sense. As a result, the second model appears more plausible. Hess and Sieker make a case for dedicating compute to Public AI models or pursuing sectoral approaches. In this scenario, it is not a single dominant user but a network of users and providers that will determine whether Gigafactories deliver outcomes that are meaningful in light of their stated objectives.
The risk is that if EU Gigafactories fail to attract sufficient home-grown demand, they will either become what Andrea Renda and Nicoleta Kyosovska have described as “cathedrals in the desert” or be pushed to lease capacity to “big players” to remain financially viable. In such a scenario, public contribution to the Gigafactories would effectively subsidise the computing costs of companies that the EU is seeking to reduce its dependency on.
The home-grown demand side of the equation is a concern that the Apply AI Strategy, adopted on 8 October 2025, is meant to address. It outlines an ambition to accelerate AI adoption across European industries and identifies eleven priority sectors with very different technological capacities, business models, and demand profiles, ranging from healthcare, robotics, and defence and security to energy, climate, agri-food, and the public sector. The Strategy also introduces cross-cutting instruments, including, for example, a network of more than 250 European Digital Innovation Hubs refocused as “Experience Centres for AI.” To support these objectives, the Strategy commits to mobilising approximately €1 billion from EU funding programmes, including Horizon Europe, the Digital Europe Programme, EU4Health, and Creative Europe. This funding is additional to the €4.12 billion channelled through the EuroHPC Joint Undertaking for the build-out of the AI Gigafactories.
Actions implementing the Apply AI Strategy are being rolled out through a mix of established EU funding programmes and newer thematic initiatives and investment umbrellas. These efforts build on, and in some cases overlap with, earlier Commission actions, including, for example, the GenAI4EU initiative launched under the 2024 AI Innovation package, which seeks to boost AI adoption among EU companies through approximately €700 million in funding across fourteen priority areas. The emerging funding reflects a familiar feature of EU research and innovation policy: dispersed funding, relatively loose coordination, and ongoing challenges in translating strategic priorities into consistently aligned investment decisions and outcomes.
A recent Commission press release announcing “over €307 million for AI and related technologies” illustrates this dynamic. The funding takes the form of two bundles of calls that were initially published under the Digital, Industry and Space cluster (Cluster 4) of the Horizon Europe Work Programme in late 2025. Of this amount, €221.8 million is distributed across fifteen calls framed around “trustworthy AI services,” innovative data use, and strategic autonomy. This bundle brings together a strikingly heterogeneous set of topics. Under the same funding umbrella sit calls on two-dimensional materials, quantum photonics, inertial navigation sensors, semiconductor regional cooperation, virtual worlds and Web 4.0 architectures, open internet stack components, data access mechanisms, robotics for manufacturing, and digital twins for early warning systems, alongside multiple other strands of the Apply AI agenda.
This breadth is not inherently problematic; it reflects the EU’s commitment to supporting a wide technological base. However, it does complicate claims of strategic focus. In this context, dispersion becomes a liability. When AI-related investment is spread across loosely connected projects and numerous consortia, it becomes difficult to build, reinforce, and sustain specific capacities deliberately and cumulatively over time.
While the EU has been constructing its public funding architecture, private capital has been moving along its own trajectory. This trajectory provides a useful counterfactual for assessing what role public investment can (and cannot) realistically play in shaping the EU AI ecosystem.
In 2025 alone, Nvidia significantly expanded its investment in European AI startups and infrastructure companies, fuelling what has been widely described as an AI bubble. Nvidia’s European portfolio includes, among others, frontier model developers Mistral AI in France and Black Forest Labs in Germany. Nvidia has also partnered with a range of European “sovereign” and open-source model builders, providing technical support to optimize models for its hardware and software stack and to deploy them through Nvidia-affiliated cloud infrastructure. These engagements go beyond capital, combining funding with technical collaboration, including integration into Nvidia’s hardware and software ecosystem. From Nvidia’s perspective, this strategy serves clear commercial objectives: it expands long-term demand for its advanced AI processors and shapes which European companies and projects are able to scale, and on what technical foundations that growth is built.
This raises a somewhat uncomfortable question about what “European AI” comes to mean in practice. In the current reality, it refers to companies headquartered in Europe, often with European founders and significant European operations, developing advanced AI technologies that rely on and are optimised for US-based platform and hardware infrastructure. This outcome sits uneasily alongside the sovereignty vision articulated in Paris, which emphasised a reduction in structural dependency.
Given the pervasive dependencies on technology infrastructure across the “AI stack,” and the scale advantage of private capital, which increasingly ties many of the EU-headquartered AI labs to global, predominantly US-led infrastructure ecosystems, what role can public investment realistically play in shaping the EU AI ecosystem in the coming years?
The traditional approach of dispersing funding across a wide array of priorities, spanning nearly all sectors, is difficult to reconcile with the ambition of building a “distinct European brand of AI” that is “a force for good,” as declared by Ursula von der Leyen in Paris. The proliferation of initiatives and hundreds of relatively small grants undermines efforts to translate this high-level ambition into a coherent and directional investment strategy. Diffuse public funding provides limited leverage to influence which AI systems are built, how they are deployed, and whose interests they ultimately serve. These constraints are further reinforced by the fact that the underlying conditions of AI development are largely pre-set by a small number of technology infrastructure providers. Even where the EU seeks to attach conditions to publicly supported facilities, it does so within a technological paradigm centred on large-scale, compute-intensive AI development. Absent a more deliberate approach, the European AI ecosystem is therefore likely to continue being shaped primarily by concentrated global private capital.
Within the current policy trajectory the EU has set for itself, reflected in the Gigafactories initiative, the alternative is to align infrastructure ambition with a more explicit strategic focus on ecosystem development. This would involve targeting a limited number of priority applications or sectors, perhaps two or three rather than eleven or fourteen, for concentrated public investment, coordinated with Member States and like-minded partners. It would require moving away from broad portfolio approaches toward more deliberate choices about AI priorities.
There already exist legal frameworks and potential instruments for this kind of prioritisation, including the Important Projects of Common European Interest (IPCEIs) – with the Joint European Forum for IPCEI – and the European Digital Infrastructure Consortia (EDICs), which enable concentrated, cross-border investment in strategic priorities. In principle, these instruments could help move beyond fragmented funding and support capabilities that require scale and coordination. At the moment, however, coordination between these instruments and the policy priorities set out in the AI Continent Action Plan and the Apply AI strategy remains very limited. Policy documents name-check IPCEIs and EDICs, but they have yet to be integrated into core implementation pathways.
A more explicit prioritisation and a more strategic focus of the EU AI industrial policy would be politically contentious, as it would require moving beyond the familiar rhetoric of “levelling the playing field” toward more openly “picking winners.” Without such a recalibration, Europe risks building world-class AI infrastructure while retaining little influence over the direction of AI development or the interests it ultimately serves.
Beyond the political difficulty of prioritisation lies a deeper question about what kind of AI development Europe is choosing to prioritise in the first place. The current emphasis on large-scale, compute-intensive AI, which is closely tied to the present wave of foundation and large language models, is not uncontested. Uncertainty about the long-term productivity gains and economic value of this trajectory is compounded by its acute and well-documented societal and environmental costs. Other pathways for AI development remain comparatively underexplored. The EU’s current bet on large-scale compute infrastructure is therefore not the only possible path forward, and should neither be treated as inevitable nor as an exhaustive vision for Europe’s AI future.