Ada Lovelace publishes expert opinion on regulating AI in Europe

March 31, 2022

The UK-based Ada Lovelace Institute publishes an expert opinion on the AI Act authored by Lilian Edwards that — among other issues — criticizes the approach to GPAI models included in the Slovenian Presidency proposal:

We are pleased to note in the draft Council position (Article 52a) that it is clarified that any person (a deployer, in effect) who ‘puts into service or uses’ a general-purpose AI system for an intended high-risk purpose comes under duties to certify conformity with the essential requirements of Chapter III and does not seem to need a ‘substantial modification’. This is clearly aimed at catching the downstream adapter or deployer.

But in so doing, the text seems to have done nothing to meet the problem that the user/deployer almost certainly lacks mandatory access to training-set or testing data, or ability to compel changes in the upstream service (unless these are built in as rights into a contract which is highly unlikely, especially when there are chains of providers as in example 1). At the same time, the Council proposal removes liability for the upstream provider of the general-purpose AI (Article 52a (1)). This exculpates the large tech suppliers like Amazon, Google and Microsoft, whose involvement in certification of AI as safe is, as discussed above, vital, since they have effective control over the technical infrastructure, training data and models, as well as the resources and power to modify and test them.
