Friction in AI Governance: Performing Participation

March 4, 2024

In her last post, "Friction and AI Governance: There is more to it than Breaking Servers," Nadia examined collective bargaining as an essential element of AI governance. In this article, she takes a closer look at a few popular participation practices and debunks them. You can read more about Nadia and learn about our fellowship program here.


Why do AI systems need participatory governance?

Despite the amount of data, capital, and power behind AI, scholars and experts, from Kate Crawford, Yeshimabeit Milner, and Timnit Gebru to Joy Buolamwini and others, have been vocal about AI reinforcing inequalities across race, gender, economic status, geography, and religion. A decade ago, the buzz around big data made it seem as if the bias problem would be resolved by better and bigger data. That hasn't happened. The biases in AI perpetuate the structural matrices of discrimination and oppression that invisibilize people who do not conform to Eurocentric white paradigms, with their outdated philosophies and frameworks about how the world works.

Consider the datasets we simply cannot let go of, such as the database of mugshots held by the National Institute of Standards and Technology (NIST). In her research for the book "Atlas of AI," Kate Crawford highlights how these mugshots became part of an infrastructure of biometric data that, after 9/11, was repurposed to track and verify people entering and leaving the United States, perpetuating new systems of surveillance. The cases listed are not just a few bad apples; the challenge would be to find a few good ones. Lacking rigorous standards for data and modeling, these AI systems reflect histories meant to uphold structural discrimination and inequality.

Participation performance

To confront malicious or poor-quality AI, civil society, private corporations, and governments have made proposals ranging from more diverse hiring and more participatory design to more participatory and ethical governance. In the January AI and the Commons Community Call hosted by Open Future, Eryk Salvaggio proposed treating training datasets like archives: spaces that explicitly invite participation through contestation, conversation, and curation. Such spaces are within reach. There has already been work on participatory digital archives, and it would not be a stretch of the imagination to connect that work to how AI training data is assembled, building on existing models of participatory governance to create more ethical AI.

For example, the open archive Archive of Our Own involves community participation in its governance. A case study by Casey Fiesler examining the archive shows what makes it unique by tracing the history and formation of the project's governance.

While many of these proposals and resulting initiatives may have noble intentions, too many stop short of putting into practice systems that materially change the industry or provide recourse for the past and present harms inflicted by AI.

Actions or initiatives such as drafting ethical principles may appear to promote progress, but they often amount to mere performance, or participation washing, with little impact on actual moments of decision-making in the governance of AI. This is comparable to security theatre, where visible safety measures create a sense of security while doing very little to achieve it.

Ethical Principles

One of the most prolific forms of supposedly more ethical or participatory AI governance has been the development of AI principles and ethics frameworks. Luke Munn, in his paper "The Uselessness of AI Ethics," makes several points that demonstrate the performative nature of creating ethical AI principles, guidelines, and frameworks. He notes that many of them (around 50 at the time of his paper's publication) come from government bodies and agencies. Even the Vatican has come out with the "Rome Call for AI Ethics." Munn summarises the values behind these principles under five broad themes: beneficence, non-maleficence, autonomy, justice, and explicability. However, there is an ever-growing chasm between these principles and their practice. At best, principles, guidelines, and frameworks merely inform or are consulted in design and decision-making.

Creating, convening, and producing these frameworks becomes a performance of work towards AI governance, one that brokers a compromise between the attempt to make more humane technology and the continuation of business as usual.

Munn points out that AI ethics and principles are often "toothless" and difficult to operationalize, when there is any attempt to operationalize them at all. A study on the effectiveness of the Association for Computing Machinery's code of ethics showed that "explicitly instructing developers to consider this ethical code had no discernable difference compared to a control group [and] developers did not alter their established ways of working." The study concluded with the need to "identify interventions that do influence decision-making."

Beyond interventions in the workplace, however, Munn points to the larger ecosystem, from education to workplace culture, in which ethics is often a footnote. Workplaces like Google regularly raise ethical questions that make the pursuit of more ethical AI appear moot, or like outright pageantry. For example, the leaked memo attacking Google's initiative to promote women in engineering dismissed the need for more gender equality in the workplace by claiming that women are biologically less suited than men to be programmers. What kind of justice or ethics is possible in that kind of work environment, or in its products?

Context matters

The supposedly seamless scaling and implementation of AI across diverse geographies is a problem. Its environmental impact and its reproduction of biased systems play out differently from place to place, and that bears on AI ethics and, more broadly, governance. Ethical principles in AI, if they are to be meaningful, should encounter and address the friction of a place's social and political context. Translating the ethical value of non-maleficence ("do no harm"), for example, can look dramatically different in public institutions than in a corporate setting, or in Los Angeles versus Brussels.

To address this, writer and artist Jenny Odell outlines the concept of "placefulness": a "sensitivity and responsibility to the historical (what happened here) and the ecological (who and what lives or lived here)." Placefulness should inform how ethical principles are operationalized and made meaningful to those who will implement and use them.

AI registers as theatre

The development of AI within the private sector races ahead of the often slow and tedious policy-making processes that could limit, guide, and provide recourse for harmful or extractive AI. As researcher Corinne Cath describes, in place of participatory governance measures that carry consequences for the work and profit of these corporations, we have a theatre of ethical AI governance. AI registers provide the sense that someone is "watching, investigating, challenging, and questioning the use of algorithms," but, much like AI principles, they are toothless.

Cath and Jensen's research on AI governance and municipal registers highlights the similarities between security theatre and AI governance, especially where AI registers are concerned. Often, at least in the registers created by Helsinki and Amsterdam, "entries in these registers are narrowly scoped to cover a set of largely uncontentious bureaucratic municipal uses of AI."

The reality is that AI registers are inconsequential unless they address political context, for instance by making explicit the history of how an AI system was made or procured, or by creating pathways for accountability and recourse for the harms these systems cause.

Participation washing

Much like security theatre, AI governance today offers a visible show of effort: aspirational ethical frameworks, guidelines, and nods towards transparency such as AI registers. What remains absent are actual measures for safeguarding digital rights and for making "users" meaningful stakeholders with the means to hold AI providers and services accountable.

The performance of AI governance, or participation washing, serves as a discursive tool. At a minimum, it distracts our attention. At worst, it normalizes harmful AI practices while signalling to the public that there are safeguards when there are none. The narrative of AI's inevitable takeover benefits from these efforts, as they allow business to go on as usual.

Substantial governance requires friction: interaction with a place's social, cultural, political, and economic dynamics. One of the reasons the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), which I mentioned previously, has been seen as a success is that it is a legal instrument that women's rights advocates have used to change national policies.

Meaningful governance does need awareness-raising and consultation, as seen in the making of registers or principles. But participation also needs to provide access to justice for people subject to AI systems, along with the means to give feedback on, and contest, the systems meant to serve them and their lives.

Nadia Nadesan