Friction and AI Governance: Institutional Intermediaries

Opinion
July 3, 2023

In her previous blog post on Friction and AI Governance, our 2023 fellow Nadia Nadesan looked at the city and what it means when technologies encounter the local social, cultural, and political context. In this article, she examines existing governance structures and practices to imagine what types of organizations or actors could facilitate a more participatory AI governance. You can read more about Nadia and learn about our fellowship program here.


Institutional Intermediaries as a Path to More Open AI Governance

The transparency and accountability that give us faith in governments and governance processes are currently absent from the often ambiguous and opaque practices around data-driven models such as OpenAI's GPT-4. This opacity goes beyond private entities. Within the European public policy realm and the ongoing work towards the AI Act, the leading bodies for standard setting are two largely unknown international non-profits: the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). Their standard-setting process has been largely closed off, with no mechanisms for civic engagement or civil society participation.[1]

We are in the early days of becoming aware of, and creating mechanisms to govern and manage, AI technologies used in the service of the public. Various civil society bodies have openly called for civil society and human rights defenders to be included in these governance processes. Technical expertise and good intentions alone do not mitigate harm, assign responsibility, or provide recourse. It matters as much that accountability systems are accessible to civic engagement and carry concrete consequences, with pathways for remedy, as that they exist at all.

Unfortunately, when it comes to answering the question of what AI governance might look like, the primary references are speculative or developed by private corporations. Instead of investigating those examples, this blog post examines social practices outside the AI realm. Namely, it draws on the international women’s rights movement to explore how other fields can inform the possibilities of AI governance. I chose the example of CEDAW, the Convention on the Elimination of All Forms of Discrimination Against Women, because it is one of the few cases where transnational governance spaces and structures both influence and take account of national advocacy and grassroots movements.

CEDAW as a Living Process

CEDAW is a treaty the United Nations adopted in 1979. It entered into force in 1981 and is one of the UN’s most successful legal instruments, with 186 out of 193 countries having ratified the convention. More than just a legal text used as a reference, it also encompasses an ongoing governance process in which a monitoring committee evaluates countries every four years to oversee the treaty’s implementation. Over the years, this committee has become a robust treaty body thanks to the attention it has gained from states and non-governmental organizations.

The CEDAW process is a reporting process: each country submits a country report and receives feedback from the monitoring committee. Writing a CEDAW report and a shadow report (a report prepared by non-governmental entities when there are gaps in the official report) requires a wide range of actors to work together, from the UN representatives who read it to the local women’s rights organizations that can provide concrete evidence of CEDAW’s implementation.

Today, various organizations work across scales, from the national to the transnational, to make effective CEDAW reporting possible. Beyond reporting, local organizations can use the feedback from CEDAW to lobby for more robust protections and accountability for women’s rights. Notable among these organizations is the International Women’s Rights Action Watch (IWRAW), founded in 1986 and the first NGO to engage directly with the CEDAW governance process. In the beginning, IWRAW’s work consisted of raising awareness of the CEDAW process among women’s rights organizations and clarifying its language, formalities, and procedures to make it more accessible. Their work, which started as a newsletter, now spans capacity building, facilitation, legal guidance, and support for NGOs writing reports. IWRAW’s contribution to the CEDAW governance process has created more substantial participation from member states by providing resources and support to local women’s rights organizations and by making CEDAW accessible.

Intermediary Institutions as Actors in Collective Governance

Initially, after its ratification, CEDAW was not taken seriously, nor did it have the clear structure and voice it has today. Over the years, however, it has become an effective governance mechanism.

This shift began once the participation of more organizations, countries, and local and national actors made CEDAW meaningful as a form of advocacy for women’s rights. One of the key organizations in making that possible was IWRAW, which played the role of an intermediary institution, supporting local organizations participating in CEDAW by guiding them through the report-writing process, building their capacity, booking flights, and even accompanying participants to Geneva, where the CEDAW process takes place.

In multilateral spaces, conversations often take place at too high a level to account for the nuance of national or local contexts. Intermediary institutions such as IWRAW have bridged that gap, supporting networks and raising awareness so that local actors can participate and lend integrity to this governance process.

Over the years, IWRAW has built an interdisciplinary team that can facilitate, build capacity, navigate legal jargon, and hold space for intercultural exchange and understanding. Using the organization as a case study offers a rich resource of transnational experience spanning more than three decades. Its work created the possibility of friction, grounding the multilateral, global process in the local and making CEDAW significant to local contexts and national governments.

Intermediary institutions such as IWRAW can help build trust in governance systems by increasing public awareness of the governance structures in place and, more importantly, by establishing clear access points for civil society. In other words, they generate friction, not in the sense of creating meaningless obstacles, but by creating space for civil society to meaningfully challenge and construct participatory governance. Intermediary institutions like IWRAW foster more democratic forms of participation – or friction – by carving out access points and more plural modes of interaction between local actors and transnational governance.

Healthy governance is not just about the presence of civil society but about having an ecosystem of networks and actors that create spaces and bridge gaps to enable more plural participation. Women’s rights activists engaged in their local and national contexts shouldn’t have to be experts in the United Nations when their involvement in CEDAW comes once every four years, nor should their lack of UN expertise exclude them from contributing their experience and knowledge to advocate for their rights in a space that could support their local work. In the case explored in this blog, IWRAW steps in to bridge this gap.

Similarly, in AI governance, human rights advocates and the many other actors and experts, from open technology activists to IP specialists, should be able to participate without becoming AI experts. They should be supported by a network that facilitates the contribution of their knowledge and perspectives.

Looking Beyond the Tech Industry

For meaningful participation in AI governance – including the work on and implementation of the AI Act – we must build towards an ecosystem of networks that can support the participation of a broader spectrum of actors and organizations. Reporting by Eticas shows that only a fraction of European technology funding goes towards fostering civil society presence and participation in national or transnational governance processes. Budget allocation is mostly top-down and devoted to research and development, with little foresight into how those most impacted, the ‘users,’ might also be a necessary part of governing this digital ecosystem.

A report published by the Ada Lovelace Institute in March this year posed the question, ‘How could a governance framework be designed to effectively protect fundamental rights and safeguard public interest from conflicting corporate interests?’ The answer might be to look outside the narrow purview of tech governance and draw on rights-based governance frameworks, where fundamental rights are the priority and local and national actors and organizations are active participants. Outside the tech industry, there are cases such as CEDAW where the possibility of accountability exists at multilateral, transnational levels of governance.

Technology is irrelevant without people, or ‘users.’ The AI Act’s governance process, however, often proceeds as if this were not the case. Governance processes should recognize that implementing AI will differ across local geographies, societies, and cultures, and will require a wide range of actors and communities to succeed. Those actors and communities should therefore have the opportunity to participate, along with the resources and access that allow them to meaningfully shape the infrastructure of their everyday lives.

Nadia Nadesan

Footnotes

  1. The work done by the Ada Lovelace Institute is a detailed resource that highlights the current ambiguity, complexity, and dangerous lack of definition around the governance process, stakeholders, and language in developing the AI Act.