Copyright, AI, and the Limits of Voluntary Licensing

Analysis
March 10, 2026

Today, the European Parliament adopted Axel Voss’s own-initiative report on copyright and generative AI—a document that has changed in important ways since its June draft, and not only in the ways that received most attention. In our first look at that draft, we argued that Voss was circling back, somewhat reluctantly, to the very framework he had helped create: an opt-out-based exception for text and data mining that already covered AI training. The real challenge, we argued, was not to replicate that structure but to address what it leaves out—primarily fair remuneration and meaningful transparency. The adopted report moves on both of those fronts, though less decisively than the political moment requires.

While the report correctly identifies a genuine problem—the absence of sustainable remuneration for the use of copyrighted works in AI training—its recommendations largely assume that functioning licensing markets can emerge within the current legal framework. That assumption does not hold up to scrutiny. The structure of AI training and the incentives created by the TDM exception make such markets difficult to sustain in practice.

A reluctant reckoning

The most significant change between the June draft and the text the Parliament adopted is a welcome, if hedged, acknowledgement that AI training is covered by the TDM exceptions under the CDSM Directive. Where the draft spent considerable energy arguing that Article 4 did not apply to generative AI training—only to propose a replacement mechanism with essentially the same structure—the adopted text accepts the applicability of the existing framework while calling for “swift clarification on its application and implementation.” This is not a ringing endorsement of the framework, but it is a meaningful retreat from the position that the existing law is simply inadequate. It also closes off a line of argument that would have created significant legal uncertainty for European AI developers more broadly, including researchers, cultural heritage institutions, and developers of public-interest AI systems.

Equally important is the positive framing of the space created for public-interest AI development under Article 3 of the CDSM Directive. Recommendation 4 calls on the Commission to ensure that activities conducted for scientific research or educational purposes—specifically by research organizations and cultural heritage institutions, and in the framework of non-commercial innovation—are not restricted. This is a meaningful acknowledgement that the report’s framework is not intended to sweep away existing protections for public-interest uses. It aligns with the argument we have made in our Beyond AI and Copyright white paper that Wikipedia, open-access publishing, and other commons-based contributions play a central role in the information ecosystem, even as the regulatory debate remains focused almost exclusively on the interests of commercial rightholders.

A market failure misread as a legal gap

The adopted report identifies its objective clearly enough: a functioning remuneration mechanism for the use of copyrighted works in AI training, underpinned by transparency obligations and a licensing market that restores the bargaining power of rightholders. What it does not provide is a theory of change for how to get there. The recommendations—spanning voluntary collective licensing, an EUIPO-managed opt-out registry, sector-based licensing agreements, transparency obligations of varying scope, a rebuttable presumption of use, and calls on the Commission to assess “the necessity and feasibility” of various mechanisms—reflect a recognizable policy logic. But they accumulate across more than twenty numbered recommendations without ever being presented as a framework: without specifying how the instruments relate to each other, in what sequence they should be pursued, or what happens when they point in different directions.

This is not untypical for own-initiative reports. But it is still noteworthy that the report repeats its call for voluntary collective licensing roughly four times across different recommendations—signalling to the Commission where the centre of gravity of Parliament’s position lies, while leaving the actual design of that mechanism entirely open. The cumulative effect is a document that reads more as a diagnosis than a prescription: it identifies what is broken without saying how to fix it.

The deeper problem is that the report also misdiagnoses why the licensing market it envisions has not emerged. Nothing in the existing legal framework prevents rightholders from collectively negotiating licences with AI developers today. The reason such agreements have not materialized at scale is not a missing legal instrument but a structural imbalance in bargaining power: AI developers have had little incentive to seek licences when training on publicly available content carries limited legal risk. Greater transparency could in principle shift that calculus—by making non-compliance more visible and legally exposed—but the report does not make that causal connection explicit. Calling for voluntary collective licensing without addressing the underlying power asymmetry is, at best, optimistic.

This matters most when it comes to the core question of remuneration. The report is clear that some form of remuneration mechanism is needed. It is equally clear—and this is a significant change from the June draft—that it opposes “any proposal for a framework based on AI providers obtaining a global licence for training their GenAI models in exchange for a flat-rate payment.” But this leaves open almost every question that matters: who pays, on what basis, how amounts are determined, and who receives compensation. The report’s preference for a licensing market as the practical vehicle for remuneration runs into the same structural problem: licensing markets organized around individual acts of reproduction are unlikely to produce fair or comprehensive outcomes. They will tend to benefit the largest and best-organized rightholders, while leaving individual creators, smaller publications, and public-interest contributors outside the system entirely.

A fragile media ecosystem deserves better

Press and news media face real and urgent challenges from generative AI—the erosion of traffic, the reproduction of their content in AI-generated summaries, and the risk of being systematically undercut in their primary markets. Protecting a pluriform media landscape is one of the most legitimate concerns raised by the AI and copyright debate. The most problematic element of the adopted report, however, is precisely where it addresses this concern. The report calls on the Commission to explore extending ancillary rights for press publishers, journalists, and news broadcasters to cover AI training, inferencing, and retrieval-augmented generation, with such uses requiring “explicit consent” and rightholders having “full control” over their content.

Copyright has never been about giving rightholders “full control” over the use of their works. It is about establishing a balance between the interests of rightholders and broader societal interests—in access to knowledge, in freedom of expression, in innovation. A remuneration right respects that balance: it ensures creators are compensated while preserving the ability of others to build on their work. Requiring explicit consent for each step of accessing information through AI-powered information retrieval systems does not; it is an exclusive right in all but name. If AI training falls under Article 4 of the CDSM Directive—which the report now (grudgingly) accepts—then requiring explicit consent for that use is directly inconsistent with the structure of the exception, which allows for opt-out but does not require opt-in. Extending that logic to inference and retrieval-augmented generation would be even more far-reaching, effectively removing press and news media content from the scope of the TDM exceptions entirely.

The report’s preferred solution for press publishers—voluntary collective licensing with a presumption of collective rights management—is a more workable framing, but it sits in tension with the “full control” and “explicit consent” language that surrounds it. This tension also reveals a broader weakness in the report’s approach: its reluctance to endorse anything stronger than voluntary mechanisms. If the structural imbalance in bargaining power is real—and the report’s own diagnosis suggests it is—then voluntary frameworks, however well designed, will not be enough.

A mandate the Commission should not waste

The Parliament’s report creates an opening that should be taken seriously. Its acknowledgement, in Recommendation 17, of “the sustainability of the public information ecosystem” as an explicit objective is politically significant—it goes beyond the usual framing of copyright as a tool for protecting individual rightholders and gestures toward the broader structural challenge that generative AI poses.

But the opening will only produce useful results if the overdue regulatory intervention in this space moves beyond the licensing logic that dominates the report’s recommendations. The core problem with a licensing framework tied to acts of reproduction—beyond the distributional inequities noted above—is that it attaches the obligation to the wrong moment. The economic value generated by training on publicly available information becomes visible only at the point of market deployment, not at the point of training. If the core difficulty lies in the mismatch between large-scale data use and reproduction-based licensing, alternative remuneration models may be more effective. One such model is a levy linked to the revenue generated by deployed AI services or systems rather than to individual acts of reproduction during training. Such a levy would attach the obligation at the point where the value actually materializes, operate on the same assumption of mass use that underpins the report’s rebuttable presumption on transparency, and eliminate the need for the granular, transaction-level transparency that the report calls for, which is likely unachievable given the scale of content used to train the current generation of AI systems.

As we have argued in our Beyond AI and Copyright white paper, there is also a structural argument for the levy approach that goes beyond copyright logic. The report’s framework, however it is implemented, will primarily benefit organized rightholders with the market power to negotiate or enforce licensing agreements. It will not, on its own, address the contributions of commons-based projects, open-access repositories, public service media organizations and cultural heritage institutions, or the many individual creators whose work circulates without commercial intermediation. Any remuneration mechanism that claims to address the sustainability of the information ecosystem must reach these contributors. A redistributive levy—with a governance framework that includes public interest and commons-based actors in the distribution of proceeds, and that directs part of the revenues toward building public AI infrastructure—offers a fundamentally more honest response to the scale of the challenge than a licensing market organized around copyright ownership.

The Parliament has correctly identified the problem. This should give the Commission the political cover to deliver a strategy that strengthens the information ecosystem, creates a level regulatory playing field, and takes the EU’s own AI ambitions seriously.

Paul Keller