Policy Report | September 2023

International AI institutions

A literature review of models, examples, and proposals

Matthijs Maas, José Jaime Villalobos

Executive summary

This literature review examines a range of institutional models that have been proposed for the international governance of artificial intelligence (AI). The review specifically focuses on proposals that would involve creating new international institutions for AI. As such, it examines seven models for international AI institutions with distinct functions.

Part I consists of the literature review. For each model, we provide (a) a description of the model’s functions and types, (b) the most common examples of the model, (c) some underexplored examples that are not (often) mentioned in the AI governance literature but that show promise, (d) a review of proposals for applying that model to the international regulation of AI, and (e) critiques of the model, both generally and in its potential application to AI.

Part II briefly discusses some considerations for further research concerning the design of international institutions for AI, including the effectiveness of each model at accomplishing its aims, treaty-based regulatory frameworks, other institutional models not covered in this review, the compatibility of institutional functions, and institutional options to host a new international AI governance body.

Overall, the review covers seven models, as well as thirty-five common examples of those models, twenty-four additional examples, and forty-nine proposals for new AI institutions based on those models. Table 1 summarizes these findings.1

Introduction

Recent and ongoing progress in artificial intelligence (AI) technology has highlighted that AI systems will have increasingly significant global impacts. In response, the past year has seen intense attention to the question of how to regulate these technologies at both the domestic and international levels. As part of this process, there have been renewed calls for establishing new international institutions to carry out much-needed governance functions and to anchor international collaboration on managing the risks as well as realizing the benefits of this technology.

This literature review examines and categorizes a wide range of institutions that have been proposed to carry out the international governance of AI.2 Before reviewing these models, however, it is important to situate proposals to establish a new international institution on AI within the broader landscape of approaches to the global governance of AI. Not all approaches to AI governance focus on creating new institutions. Rather, the institutional approach is only one of several different approaches to international AI governance—each of them concentrating on different governance challenges posed by AI, and each of them providing different solutions.3 These approaches include: 

(1) Rely on unilateral extraterritorial regulation. The extraterritorial approach forgoes (or at least does not prioritize) the multilateral pursuit of international regimes, norms, or institutions. Rather, it aims to enact effective domestic regulations on AI developments and then to rely on the direct or extraterritorial effects of such regulations to shape the conditions or standards for AI governance in other jurisdictions. As such, this approach includes proposals to first regulate AI within (key) countries, whether by existing laws,4 through new laws or standards developed by existing institutions, or through new domestic institutions (such as a US “AI Control Council”5 or a National Algorithms Safety Board6). These national policy levers7 can unilaterally affect the global approach to AI, either directly—for instance, through the effect of export controls on chokepoints in the AI chip supply chains8—or because of the way such regulations can spill over to other jurisdictions, as seen in discussions of a “Brussels Effect,” a “California Effect,” or even a “Beijing Effect.”9

(2) Apply existing international institutions, regimes, or norms to AI. The norm-application-focused approach argues that because much of international law establishes broad, technology-neutral principles and obligations, and many domains are already subject to a wide set of overlapping institutional activities, AI technology is in fact already adequately regulated in international law.10 As such, AI governance does not need new institutions or novel institutional models; rather, the aim is to reassert, reapply, extend, and clarify long-existing international institutions and norms. This is one approach that has been taken (with greater or lesser success) to address the legal gaps initially created by some past technologies, such as submarine warfare,11 cyberwar,12 or data flows within the digital economy,13 amongst others. This also corresponds to the approach taken by many international legal scholars, who argue that states should simply recognize that AI is already covered and regulated by existing norms and doctrines in international law, such as the principles of International Human Rights Law,14 International Humanitarian Law, International Criminal Law,15 the doctrine of state responsibility,16 or other regimes.17

(3) Adapt existing international institutions or norms to AI. This approach concedes that AI technology is not yet adequately or clearly governed under international law but holds that existing international institutions could still be adapted to take on this role and may already be doing so. This approach includes proposals that center on mapping, supporting, and extending the existing AI-focused activities of existing international regimes and institutions such as the IMO, ICAO, ITU,18 various UN agencies,19 or other international organizations.20 Others explore proposals for refitting existing institutions, such as expanding the G20 with a Coordinating Committee for the Governance of Artificial Intelligence21 or changing the mandate or composition of UNESCO’s International Research Centre on Artificial Intelligence (IRCAI) or the International Electrotechnical Commission (IEC),22 to take up a stronger role in AI governance. Finally, others explore how either states (through Explanatory Memoranda or treaty reservations) or treaty bodies (through Working Party Resolutions) could adapt existing treaty regimes to more clearly cover AI systems.23 The emphasis here is on a “decentralized but coordinated” approach that supports institutions to adapt to AI,24 rather than necessarily aiming to establish new institutions in an already-crowded existing international “regime complex.”25

(4) Create new international institutions to regulate AI based on the model of past or existing institutions. The institution-re-creating approach argues that AI technology does need new, distinct international institutions to be adequately governed. However, in developing designs or making the case for such institutions, this approach often points to the precedent of past or existing international institutions and regimes that have a similar model. 

(5) Create entirely novel international institutional models to regulate AI. This approach argues not only that AI technology needs new international institutions, but also that past or existing international institutions (mostly) do not provide adequate models to narrowly follow or mimic.26 This is potentially reflected in some especially ambitious proposals for comprehensive global AI regimes or in suggestions to introduce entirely new mechanisms (e.g., “regulatory markets”27) into governance.

In this review, we specifically focus on proposals for international AI governance and regulation that involve creating new international institutions for AI. That is to say, our main focus is on approach 4 and, to a lesser extent, approach 5. 

We focus on new institutions because they might be better positioned to respond to the novelty, stakes, and technical features of advanced AI systems.28 Indeed, the current climate of global attention on AI seems potentially more supportive of establishing new landmark institutions for AI than has been the case in past years. As AI capabilities progress at an unexpected rate, multiple government representatives and entities29 as well as international organizations30 have recently stated their support for a new international AI governance institution. Additionally, the idea of establishing such institutions has taken root among many of the leading actors in the AI industry.31

That said, our review comes with two caveats. First, our focus on this institutional approach over others does not mean that pursuing the creation of new institutions is necessarily an easy strategy, or one more feasible than the other approaches listed above. Indeed, proposals for new treaty regimes or international institutions for AI—especially when they draw analogies with organizations that were set up decades ago—may often underestimate how much the ground of global governance has changed in recent years. As such, they do not always reckon fully with the strong trends and forces in global governance which, for better or worse, frequently push states towards relying on extending existing norms (approach 2) or adapting existing institutions (approach 3)32 rather than creating novel institutions. Likewise, there are further trends shifting US policy towards pursuing international cooperation through nonbinding international agreements rather than treaties,33 as well as concerns that, by some measures, international organizations may be taking up a less central role in international relations today than they have in the past.34 All of these trends should temper, or at least inform, proposals to establish new institutions.

Second, even if one is determined to pursue establishing a new international institution along one of the models discussed here, many key open questions remain about the optimal route to design and establish that organization, including: (a) Given that many institutional functions might be required to adequately govern advanced AI systems, might there be a need for “hybrid” or combined institutions with a dual mandate, like the IAEA?35 (b) Should an institution be tightly centralized, or could it be relatively decentralized, with one or more new institutions orchestrating the AI policy activities of a constellation of many other (existing or new) organizations?36 (c) Should such an organization be established formally, or are informal club approaches adequate in the first instance?37 (d) Should voting rules within such institutions work on the grounds of consensus or simple majority? (e) What rules should govern adapting or updating the institution’s mission and mandate to track ongoing developments in AI? This review will briefly flag and discuss some of these questions in Part II but will leave many of them open for future research.

Regarding terminology, we will use “international institution” and “international organization” interchangeably and broadly to refer to any of (a) formal intergovernmental organizations (FIGOs) established through a constituent document (e.g., WTO, WHO); (b) treaty bodies or secretariats that have a more limited mandate, primarily supporting the implementation of a treaty or regime (e.g., the BWC Implementation Support Unit); and (c) “informal IGOs” (IIGOs) that consist of loose “task groups” and coalitions of states (e.g., the G7, BRICS, G20).38 We use “model” to refer to the general cluster of institutions under discussion; we use “function” to refer to a given institutional model’s purpose or role. We use “AI proposals” to refer to the precise institutional models that are proposed for international AI governance.

I. Review of institutional models 

Below, we review a range of institutional models that have been proposed for AI governance. For each model, we discuss its general functions, different variations or forms of the model, a range of examples that are frequently invoked, and explicit AI governance proposals that follow the model. In addition, we will highlight additional examples that have not received much attention but that we believe could be promising. Finally, where applicable, we will highlight existing critiques of a given model.

Model 1: Scientific consensus-building

1.1 Functions and types: The functions of the scientific consensus-building institutional model are to (a) increase general policymaker and public awareness of an issue, and especially to (b) establish a scientific consensus on an issue. The aim is to facilitate greater common knowledge or a shared perception of an issue amongst states, in order to motivate national action or enable international agreements. Overall, the goal of institutions following this model is not to establish an international consensus on how to respond or to hand down regulatory recommendations directly, but simply to provide a basic knowledge base to underpin the decisions of key actors. By design, these institutions are, or aim to be, non-political—as in the IPCC’s mantra to be “policy-relevant and yet policy-neutral, never policy-prescriptive.”39

1.2 Common examples: Commonly cited examples of scientific consensus-building institutions include most notably the Intergovernmental Panel on Climate Change (IPCC),40 the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES),41 and the Scientific Assessment Panel (SAP) of the United Nations Environment Programme (UNEP).42 

1.3 Underexplored examples: An example that has not yet been invoked in the literature but that could be promising to explore is the Antarctic Treaty’s Committee for Environmental Protection (CEP), which provides expert advice to the Antarctic Treaty Consultative Meetings and which combines scientific consensus-building models with risk-management functions, supporting the Protocol on Environmental Protection to the Antarctic Treaty.43 Another example could be the World Meteorological Organization (WMO), which monitors weather and climatic trends and makes information available.

1.4 Proposed AI institutions along this model: There have been a range of proposals for scientific consensus-building institutions for AI. Indeed, in 2018 the precursor initiative to what would become the Global Partnership on AI (GPAI) was envisaged by France and Canada as an Intergovernmental Panel on AI (IPAI) along the IPCC model.44 This proposal was supported by many researchers: Kemp and others suggest an IPAI that could measure, track, and forecast progress in AI, as well as its use and impacts, to “provide a legitimate, authoritative voice on the state and trends of AI technologies.”45 They argue that an IPAI could perform structural assessments every three years as well as take up quick-response special-issue assessments. In a contemporaneous paper, Mialhe proposes an IPAI model as an institution that would gather a large and global group of experts “to inform dialogue, coordination, and pave the way for efficient global governance of AI.”46

More recently, Ho and others propose an intergovernmental Commission on Frontier AI to “establish a scientific position on opportunities and risks from advanced AI and how they may be managed,” to help increase public awareness and understanding, to “contribute to a scientifically informed account of AI use and risk mitigation [and to] be a source of expertise for policymakers.”47 Bremmer and Suleyman propose a global scientific body to objectively advise governments and international bodies on questions as basic as what AI is and what kinds of policy challenges it poses.48 They draw a direct link to the IPCC model, noting that “this body would have a global imprimatur and scientific (and geopolitical) independence […] [a]nd its reports could inform multilateral and multistakeholder negotiations on AI.”49 Bak-Coleman and others argue in favor of an Intergovernmental Panel on Information Technology, an independent, IPCC-like panel charged with studying the “impact of emerging information technologies on the world’s social, economic, political and natural systems.”50 In their view, this panel would focus on many “computational systems,” including “search engines, online banking, social-media platforms and large language models” and would have leverage to persuade companies to share key data.51 

Finally, Mulgan and others, in a 2023 paper, propose a Global AI Observatory (GAIO) as an institution that “would provide the necessary facts and analysis to support decision-making [and] would synthesize the science and evidence needed to support a diversity of governance responses.”52 Again drawing a direct comparison to the IPCC, they anticipate that such a body could set the foundation for more serious regulation of AI through six activities: (a) a global standardized incident reporting database, (b) a registry of crucial AI systems, (c) a shared body of data and analysis of the key facts of the AI ecosystem, (d) working groups exploring global knowledge about the impacts of AI on critical areas, (e) the ability to offer legislative assistance and model laws, and (f) the ability to orchestrate global debate through an annual report on the state of AI.53 They have since incorporated this proposal within a larger “Framework for the International Governance of AI” by the Carnegie Council for Ethics in International Affairs’s Artificial Intelligence & Equality Initiative, alongside other components such as a neutral technical organization to analyze “which legal frameworks, best practices, and standards have risen to the highest level of global acceptance.”54

1.5 Critiques of this model: One concern that has been expressed is that AI governance is currently too institutionally immature to support an IPCC-like model, since, as Roberts argues, “the IPCC […] was preceded by almost two decades of multilateral scientific assessments, before being formalised.”55 He considers this a particular problem for replicating that model for AI, given that some AI risks currently command significantly less scientific consensus.56 Separately, Bak-Coleman and others argue that a scientific consensus-building organization for digital technologies would face a far more difficult research environment than the IPCC and IPBES because, as opposed to the rich data and scientifically well-understood mechanisms that characterize climate change and ecosystem degradation, research into the impacts of digital technologies often faces data access restrictions.57 Ho and others argue that a Commission on Frontier AI would face more general scientific challenges in adequately studying future risks “on the horizon,” as well as potential politicization, both of which might inhibit the ability of such a body to effectively build consensus.58 Indeed, in the absence of decisive and incontrovertible evidence about the trajectory and risks of AI, a scientific consensus-building institution would likely struggle to deliver on its core mission and might instead spark significant scientific contestation and disagreement amongst AI researchers.

Model 2: Political consensus-building and norm-setting 

2.1 Functions and types: The function of political consensus-building and norm-setting institutions is to help states come to greater political agreement and convergence about the way to respond to a (usually) clearly identified and (ideally) agreed issue or phenomenon. These institutions’ aim is to reach the political consensus necessary either to align national policymaking responses sufficiently well, achieving a level of harmonization that avoids trade restrictions or impediments to progress on the issue, or to help begin negotiations on other institutions that establish more stringent regimes. Political consensus-building institutions do this by providing fora for discussion and debate that can aid the articulation of potential compromises between state interests and by exerting normative pressure on states towards certain goals. In a norm-setting capacity, institutions can also draw on (growing) political consensus to set and share informal norms, even if formal institutions have not yet been created. For instance, if negotiations for a regulatory or control institution are held up, slowed, or fail, political consensus-building institutions can also play a norm-setting function by establishing, as soft law, informal standards for behavior. While such norms are not as strictly specified or as enforceable as hard-law regulations, they can still carry force and see uptake.

2.2 Common examples: There are a range of examples of political consensus-building institutions. Some of these are broad, such as conferences of parties to a treaty (also known as COPs, the best known being that of the United Nations Framework Convention on Climate Change [UNFCCC]).59 Many others, however, such as the Organization for Economic Co-operation and Development (OECD), the G20, and the G7, reflect smaller, at times more informal, governance “clubs,” which can often move ahead towards policy-setting more quickly because their membership is already somewhat aligned60 and because many of them have already begun to undertake activities or incorporate institutional units focused on AI developments.61

Gutierrez and others have reviewed a range of historical cases of (domestic and global) soft-law governance that they argue could provide lessons for AI. These include a range of institutional activities, such as UNESCO’s 1997 Universal Declaration on the Human Genome and Human Rights, 2003 International Declaration on Human Genetic Data, and 2005 Universal Declaration on Bioethics and Human Rights,62 the Environmental Management System (ISO 14001), the Sustainable Forestry Practices by the Sustainable Forestry Initiative and Forest Stewardship Council, and the Leadership in Energy and Environmental Design initiative.63 Others, however, such as the Internet Corporation for Assigned Names and Numbers (ICANN), the Asilomar rDNA Guidelines, the International Gene Synthesis Consortium, the International Society for Stem Cell Research Guidelines, the BASF Code of Conduct, the Environmental Defense Fund, and the DuPont Risk Frameworks, offer stronger examples of success.64 Turner likewise argues that ICANN, which has managed to develop productive internet policy, offers a model for international AI governance.65 Elsewhere, Harding argues that the 1967 Outer Space Treaty offers a telling case of a treaty regime that quickly crystallized state expectations and policies around safe innovation in a then-novel area of science.66 Finally, Feijóo and others suggest that “new technology diplomacy” on AI could involve a series of meetings or global conferences on AI, which could draw lessons from experiences such as the World Summits on the Information Society (WSIS).67

2.3 Underexplored examples: Examples of norm-setting institutions that formulate and share relevant soft-law guidelines on technology include the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the International Telecommunication Union (ITU), and the United Nations Commission on International Trade Law (UNCITRAL)’s Working Group on Electronic Commerce.68 Another good example of a political consensus-building and norm-setting initiative can be found in the 1998 Lysøen Declaration,69 an initiative by Canada and Norway that expanded to 11 highly committed states along with several NGOs and which kicked off a “Human Security Network” that achieved an outsized global impact, including the Ottawa Treaty ban on antipersonnel mines, the Rome Statute establishing the International Criminal Court, the Kimberley Process aimed at inhibiting the flow of conflict diamonds, and landmark Security Council resolutions on Children and Armed Conflict and on Women, Peace and Security. Another norm-setting institution that is not yet often invoked in AI discussions but that could be promising to explore is the Codex Alimentarius Commission (CAC), which develops and maintains the Food and Agriculture Organization (FAO)’s Codex Alimentarius, a collection of non-enforceable but internationally recognized standards and codes of practice concerning various aspects of food production, food labeling, and safety. Another example of a “club” under this model that is not often mentioned but that could be influential is the BRICS group, which recently expanded from 5 to 11 members.

2.4 Proposed AI institutions along this model: Many proposals for political consensus-building institutions on AI tend not to focus on establishing new institutions, arguing instead that it is best to put AI issues on the agenda of existing (established and recognized) consensus-building institutions (e.g., the G20) or of existing norm-setting institutions (e.g., ISO). Indeed, even recent proposals for new international institutions still emphasize that these should link up well with already-ongoing initiatives, such as the G7 Hiroshima Process on AI.70

However, there have been proposals for new political consensus-building institutions. Erdélyi and Goldsmith propose an International AI Organisation (IAIO), “to serve as an international forum for discussion and engage in standard setting activities.”71 They argue that “at least initially, the global AI governance framework should display a relatively low level of institutional formality and use soft-law instruments to support national policymakers in the design of AI policies.”72 Moreover, they emphasize that the IAIO “should be hosted by a neutral country to provide for a safe environment, limit avenues for political conflict, and build a climate of mutual tolerance and appreciation.”73 More recently, the final report of the US National Security Commission on Artificial Intelligence includes a proposal for an Emerging Technology Coalition, “to promote the design, development, and use of emerging technologies according to democratic norms and values; coordinate policies and investments to counter the malign use of these technologies by authoritarian regimes; and provide concrete, competitive alternatives to counter the adoption of digital infrastructure made in China.”74 Most recently, Marcus and Reuel propose the creation of an International Agency for AI (IAAI) tasked with convening experts and developing tools to help find “governance and technical solutions to promote safe, secure and peaceful AI technologies.”75

At the looser organizational end, Feijóo and others propose a new technology diplomacy initiative as “a renewed kind of international engagement aimed at transcending narrow national interests and seeks to shape a global set of principles.” In their view, such a framework could “lead to an international constitutional charter for AI.”76 Finally, Jernite and others propose an international Data Governance Structure, a multi-party, distributed governance arrangement for the systematic and transparent management of language data at a global level, which includes a Data Stewardship Organization to develop “appropriate management plans, access restrictions, and legal scholarship.”77 Other proposed organizations are more focused on supporting states in implementing AI policy, such as through training. For instance, Turner proposes creating an International Academy for AI Law and Regulation.78

2.5 Critiques of this model: There have not generally been many in-depth critiques of proposals for new political consensus-building or norm-setting institutions. However, some concerns focus on the difficult tradeoffs that consensus-building institutions face in deciding whether to prioritize breadth of membership and inclusion or depth of mission alignment. Institutions that aim to foster consensus across a very broad swath of actors may be very slow to reach such normative consensus, and even when they do, they may only achieve a “lowest-common-denominator” agreement.79 On the other hand, others counter that AI consensus-building institutions or fora will need to be sufficiently inclusive—in particular, and possibly controversially, with regard to China80—if they do not want to run the risk of producing a fractured and ineffective regime, or even see negotiations implode over the political question of who was invited or excluded.81 Finally, a more foundational challenge to political consensus-building institutions is that while they may produce (the appearance of) joint narratives, such narratives may have little teeth if the agreement is not binding.82

Model 3: Coordination of policy and regulation

3.1 Functions and types: The functions of this institutional model are to help align and coordinate policies, standards, or norms83 in order to ensure a coherent international approach to a common problem. There is significant internal variation in the setup of institutions under this model, with various subsidiary functions. For instance, such institutions may (a) directly regulate the deployment of a technology in relative detail, requiring states to comply with and implement those regulations at the national level; (b) assist states in the national implementation of agreed AI policies; (c) focus on the harmonization and coordination of policies; (d) focus on the certification of industries or jurisdictions to ensure they comply with certain standards; or (e) in some cases, take up functions related to monitoring and enforcing norm compliance.

3.2 Common examples: Common examples of policy-setting institutions include the World Trade Organization (WTO) as an exemplar of an empowered, centralized regulatory institution.84 Other examples given of regulatory institutions include the International Civil Aviation Organization (ICAO), the International Maritime Organization (IMO), the International Atomic Energy Agency (IAEA), and the Financial Action Task Force (FATF).85 Examples of policy-coordinating institutions include the United Nations Environment Programme (UNEP), which has coordinated international agreements on the environment and facilitated new agreements, including the 1985 Vienna Convention for the Protection of the Ozone Layer.86 Nemitz points to the example of the institutions created under the United Nations Convention on the Law of the Sea (UNCLOS) as a model for an AI regime, including an international court to enforce the proposed treaty.87 Finally, Sepasspour proposes the establishment of an “AI Ethics and Safety Unit” within the existing International Electrotechnical Commission (IEC), under a model that is “inspired by the Food and Agriculture Organization’s (FAO) Food Safety and Quality Unit and Emergency Prevention System for Food Safety early warning system.”88

3.3 Underexplored examples: Examples that are not yet often discussed but that could be useful or insightful include the European Monitoring and Evaluation Programme (EMEP), which implements the 1983 Convention on Long-Range Transboundary Air Pollution—a regime that has proven particularly adaptive.89 A more sui generis example is that of international financial institutions, like the World Bank or the International Monetary Fund (IMF), which tend to shape domestic policy indirectly through conditional access to loans or development funds.

3.4 Proposed AI institutions along this model: Specific to advanced AI, recent proposals for regulatory institutions include Ho and others’ Advanced AI Governance Organization, which “could help internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards, and assisting in their implementation.”90

Trager and others propose an International AI Organization (IAIO) to certify jurisdictions’ compliance with international oversight standards. These would be enforced through a system of conditional market access in which trade barriers would be imposed on jurisdictions which are not certified or whose supply chains integrate AI from non-IAIO certified jurisdictions. Among other advantages, the authors suggest that this system could be less vulnerable to proliferation of industry secrets by having states establish their own domestic regulatory entities rather than having international jurisdictional monitoring (as is the case with the IAEA). However, the authors also propose that the IAIO could provide monitoring services to governments that have not yet built their own monitoring capabilities. The authors argue that their model has several advantages over others, including agile standard-setting, monitoring, and enforcement.91 

In a regional context, Stix proposes an EU AI Agency which, among other roles, could analyze gaps in AI policy and develop policies to fill them. For this agency to be effective, Stix suggests it should be kept independent from political agendas by, for instance, having a mandate that does not coincide with election cycles.92 Webb proposes a “Global Alliance on Intelligence Augmentation” (GAIA), which would bring together experts from different fields to set best practices for AI.93

Chowdhury proposes a generative AI global governance body as a “consolidated ongoing effort with expert advisory and collaborations [which] should receive advisory input and guidance from industry, but have the capacity to make independent binding decisions that companies must comply with.”94 In her analysis, this body should be funded via unrestricted and unconditional contributions from all AI companies engaged in the creation or use of generative AI, and it should “cover all aspects of generative AI models, including their development, deployment, and use as it relates to the public good. It should build upon tangible recommendations from civil society and academic organizations, and have the authority to enforce its decisions, including the power to require changes in the design or use of generative AI models, or even halt their use altogether if necessary.”95

A proposal for a policy-coordinating institution is Kemp and others’ Coordinator and Catalyser of International AI Law, which would be “a coordinator for existing efforts to govern AI and catalyze multilateral treaties and arrangements for neglected issues.”96

3.5 Critiques of this model: Castel and Castel critique international conventions on the grounds that they “are difficult to monitor and control.”97 More specifically, Ho and others argue that a model like an Advanced AI Governance Organization would face challenges around its ability to set and update standards sufficiently quickly, around incentivizing state participation in adopting the regulations, and in sufficiently scoping the challenges to focus on.98 Finally, reviewing general patterns in current state activities on AI standard-setting, von Ingersleben notes that “technical experts hailing from geopolitical rivals, such as the United States and China, readily collaborate on technical AI standards within transnational standard-setting organizations, whereas governments are much less willing to collaborate on global ethical AI standards within international organizations,”99 which suggests potential obstacles to overcoming state disinterest in participating in any international institutions focused on more political and ethical standard-setting.

Model 4: Enforcement of standards or restrictions

4.1 Functions and types: The function of this institutional model is to prevent the production, proliferation, or irresponsible deployment of a dangerous or illegal technology, product, or activity. To fulfill that function, institutions under this model rely on, among other mechanisms, (a) bans and moratoria, (b) nonproliferation regimes, (c) export-control lists, (d) monitoring and verification mechanisms,100 (e) licensing regimes, and (f) registering and/or tracking of key resources, materials, or stocks. Other types of mechanisms, such as (g) confidence-building measures (CBMs), are generally transparency-enabling.101 While generally focused on managing tensions and preventing escalations,102 CBMs can also build trust amongst states in each other’s mutual compliance with standards or prohibitions, and can therefore also support or underwrite standard- and restriction-enforcing institutions.

4.2 Common examples: The most prominent example of this model, especially in discussions of institutions capable of carrying out monitoring and verification roles, is the International Atomic Energy Agency (IAEA)103—in particular, its Department of Safeguards. Many other proposals refer to the monitoring and verification mechanisms of arms control treaties.104 For instance, Baker has studied the monitoring and verification mechanisms for different types of nuclear arms control regimes, reviewing the role of the IAEA system under Comprehensive Safeguards Agreements with Additional Protocols in monitoring nonproliferation treaties such as the Non-Proliferation Treaty (NPT) and the five Nuclear-Weapon-Free-Zone Treaties, the role of monitoring and verification arrangements in monitoring bilateral nuclear arms control limitation treaties, and the role of the International Monitoring System (IMS) in monitoring and enforcing (prospective) nuclear test bans under the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).105 Shavit likewise refers to the precedent of the NPT and IAEA in discussing a resource (compute) monitoring framework for AI.106 

Examples given of export-control regimes include the Nuclear Suppliers Group, the Wassenaar Arrangement, and the Missile Technology Control Regime.107 As examples of CBMs, commentators have pointed to the Open Skies Treaty,108 which is overseen by the Open Skies Consultative Commission (OSCC).

There are also examples of proposed global technology control institutions that were never carried through but which are still discussed as inspirations for AI, such as the international Atomic Development Authority (ADA) proposed in the early nuclear age109 or early- to mid-20th-century proposals for the global regulation of military aviation.110

4.3 Underexplored examples: Examples that are not yet often discussed in the context of AI but that could be promising are the Organisation for the Prohibition of Chemical Weapons (OPCW),111 the Biological Weapons Convention’s Implementation Support Unit, the International Maritime Organization (in its ship registration function), and the Secretariat of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), specifically its database of national import and export reports.

4.4 Proposed AI institutions along this model: Proposals along this model are particularly prevalent. Indeed, as mentioned, a significant part of the literature on the international governance of AI has made reference to some sort of “IAEA for AI.” For instance, in relatively early proposals,112 Turchin and others propose a “UN-backed AI-control agency” which “would require much tighter and swifter control mechanisms, and would be functionally equivalent to a world government designed specifically to contain AI.”113 Ramamoorthy and Yampolskiy propose a “global watchdog agency” that would have the express purpose of tracking AGI programs and that would have the jurisdiction and the lawful authority to intercept and halt unlawful attempts at AGI development.114 Pointing to the precedent of both the IAEA and its inspection regime, and the Comprehensive Nuclear Test-Ban Treaty Organization (CTBTO)’s Preparatory Commission, Nindler proposes an International Enforcement Agency for safe AI research and development, which would support and implement the provisions of an international treaty on safe AI research and development, with the general mission “to accelerate and enlarge the contribution of artificial intelligence to peace, health and prosperity throughout the world [and … to ensure that its assistance] is not used in such a way as to further any military purpose.”115 Such a body would be charged with drafting safety protocols and measures, and he suggests that its enforcement could, in extreme cases, be backed up by the use of force under the UN Security Council’s Chapter VII powers.116

Whitfield draws on the example of the UN Framework Convention on Climate Change to propose a UN Framework Convention on AI (UNFCAI) along with a Protocol on AI that would subsequently deliver the first set of enforceable AI regulations. He proposes that these should be supported by three new bodies: an AI Global Authority (AIGA) to provide an inspection regime in particular for military AI, an associated “Parliamentary Assembly” supervisory body that would enhance democratic input into the treaty’s operations and play “a constructive monitoring role,” as well as a multistakeholder Intergovernmental Panel on AI to provide scientific, technical, and policy advice to the UNFCAI.117

More recently,118 Ho and others propose an “Advanced AI Governance Organization” which, in addition to setting international standards for the development of advanced AI (as discussed above), could monitor compliance with these standards through, for example, self-reporting, monitoring practices within jurisdictions, or detection and inspection of large data centers.119 Altman and others propose an “IAEA for superintelligence” consisting of “an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”120 In a very similar vein, Guest (based on an earlier proposal by Karnofsky)121 calls for an “International Agency for Artificial Intelligence (IAIA)” to conduct “extensive verification through on-chip mechanisms [and] on-site inspections” as part of his proposal for a “Collaborative Handling of Artificial Intelligence Risks with Training Standards (CHARTS).”122 Drawing together elements from several models—and referring to the examples of the IPCC, Interpol, and the WTO’s dispute settlement system—Gutierrez proposes a “multilateral AI governance initiative” to mitigate “the shared large-scale high-risk harms caused directly or indirectly by AI.”123 His proposal envisions an organizational structure consisting of (a) a forum for member state representation (which adopts decisions via supermajority); (b) technical bodies, such as an external board of experts, and a permanent technical and liaison secretariat that works from an information and enforcement network and which can issue “red notice” alerts; and (c) an arbitration board that can hear complaints by non-state AI developers who seek to contest these notices as well as by member states.124

In a 2013 paper, Wilson proposes an “Emerging Technologies Treaty”125 that would address risks from many emerging technologies. In his view, this treaty could either be housed under an existing international organization or body or established separately, and it would establish a body of experts that would determine whether there were “reasonable grounds for concern” about AI or other dangerous research, after which states would be required to regulate or temporarily prohibit research.126 Likewise drawing on the IAEA model, Chesterman proposes an International Artificial Intelligence Agency (IAIA) as an institution with “a clear and limited normative agenda, with a graduated approach to enforcement,” arguing that “the main ‘red line’ proposed here would be the weaponization of AI—understood narrowly as the development of lethal autonomous weapon systems lacking ‘meaningful human control’ and more broadly as the development of AI systems posing a real risk of being uncontrollable or uncontainable.”127 In practice, this organization would draw up safety standards, develop a forensic capability to identify those responsible for “rogue” AI, serve as a clearinghouse to gather and share information about such systems, and provide early notification of emergencies.128 Chesterman suggests that this IAIA could take an organizational lesson from the IAEA, where the Board of Governors (rather than the annual General Conference) has ongoing oversight of operations.

Other authors endorse an institution more directly aimed at preventing or limiting proliferation of dangerous AI systems. Jordan and others propose a “NPT+” model,129 and the Future of Life Institute (FLI) proposes “international agreements to limit particularly high-risk AI proliferation and mitigate the risks of advanced AI.”130 PauseAI proposes an international agreement that sets up an “International AI Safety Agency” that would be in charge of granting approvals for deployments of AI systems and new training runs above a certain size.131 The Elders, a group of independent former world leaders, have recently called on countries to request, via the UN General Assembly, that the International Law Commission draft an international treaty to establish a new “International AI Safety Agency,”132 drawing on the models of the NPT and the IAEA, “to manage these powerful technologies within robust safety protocols [and to …] ensure AI is used in ways consistent with international law and human rights treaties.”133 More specific monitoring provisions are also entertained; for instance, Balwit briefly discusses an advanced AI chips registry, potentially organized by an international agency.134

At the level of transparency-supporting agreements, there are many proposals for confidence-building measures for (military) AI. Such proposals focus on bilateral arrangements that build confidence amongst states and contribute to stability (as under Model 5), but which lack distinct institutions. For instance, Shoker and others discuss an “international code of conduct for state behavior.”135 Scharre, Horowitz, Khan, and others discuss a range of other AI CBMs,136 including the marking of autonomous weapons systems, geographic limits, and limits on particular (e.g., nuclear) operations of AI.137 They propose to group these under an International Autonomous Incidents Agreement (IAIA) to “help reduce risks from accidental escalation by autonomous systems, as well as reduce ambiguity about the extent of human intention behind the behavior of autonomous systems.”138 In doing so, they point to the precedent of arrangements such as the 1972 Incidents at Sea Agreement139 as well as the 12th- to 19th-century development of Maritime Prize Law.140 Imbrie and Kania propose an “Open Skies on AI” agreement.141 Bremmer and Suleyman propose a bilateral US-China regime to foster cooperation between the United States and China on AI, envisioning this “to create areas of commonality and even guardrails proposed and policed by a third party.”142

4.5 Critiques of this model: Many critiques of the enforcement model have ended up focusing (whether fairly or not) on the appropriateness of the basic analogy between nuclear weapons and AI that is explicit or implicit in proposals for an IAEA- or NPT-like regime. For instance, Kaushik and Korda have opposed what they see as aspirations to a “wholesale ban” on dangerous AI and argue that “attempting to regulate artificial intelligence indiscriminately would be akin to regulating the concept of nuclear fission itself.”143 

Others critique the appropriateness of an IAEA-modeled approach: Stewart suggests that the focus on the IAEA’s safeguards is inadequate since AI systems cannot be safeguarded in the same way, and he suggests that better lessons might instead be found in the IAEA’s International Physical Protection Advisory Service (IPPAS) missions, which allow it to serve as an independent third party to assess the regulatory preparedness of countries that aim to develop nuclear programs.144 Drexel and Depp argue that even if this IAEA model could work on a technical level, it will likely be prohibitively difficult to negotiate such an intense level of oversight.145 Further, Sepasspour and Law both note that, far from a straightforward setup, there were years of delay between the IAEA’s establishment (1957), its adoption of the INFCIRC 26 safeguards document (1961), its taking of a leading role in nuclear nonproliferation upon the adoption of the NPT (1968), and its eventual further empowerment of its verification function through the Additional Protocol (1997).146 Such a slow accretion of authority might not be adequate given the speed of advanced AI development. Finally, another issue is that the strength of an IAEA-like agency depends on the existence of supportive international treaties as well as specific incentives for participation.

Others question whether this model would be desirable, even if achievable. Howard generally critiques many governance proposals that would involve centralized control (whether domestic or global) over the proliferation of and access to frontier AI systems, arguing that such centralization would end up only advantaging currently powerful AI labs as well as malicious actors willing to steal models, with the concern that this would have significant illiberal effects.147

Model 5: Stabilization and emergency response 

5.1 Functions and types: The function of this institutional model is to ensure that an emerging technology or an emergency does not have a negative impact on social stability and international peace.

Such institutions can serve various subsidiary functions, including (a) performing general stability management by assessing and mitigating systemic vulnerabilities that are susceptible to incidents or accidents; (b) providing early warning of, and coordinating responses to, incidents and emergencies, thereby creating timely common knowledge of an emergency;148 and (c) generally stabilizing relations, behavior, and expectations around AI technology to encourage transparency and trust around state activities in a particular domain and to avoid inadvertent military conflict.

5.2 Common examples: Examples of institutions involved in stability management include the Financial Stability Board (FSB), an entity “composed of central bankers, ministries of finance, and supervisory and regulatory authorities from around the world.”149 Another example might be the United Nations Office for Disaster Risk Reduction (UNDRR), which focuses on responses to natural disasters.150 Gutierrez invokes Interpol’s “red notice” alert system as an example of a model by which an international institution could alert global stakeholders about the dangers of a particular AI system.151

5.3 Underexplored examples: Examples that are not yet invoked but that could be promising for early warning functions include the WHO’s “public health emergency of international concern” mechanism and the procedure established under the IAEA’s 1986 Convention on Early Notification of a Nuclear Accident.

5.4 Proposed AI institutions along this model: AI proposals along the early warning model include Pauwels’ paper describing a Global Foresight Observatory as a multistakeholder platform aimed at fostering greater cooperation in technological and political preparedness for the impacts of innovation in various fields, including AI.152 Bremmer and Suleyman propose a Geotechnology Stability Board which “could work to maintain geopolitical stability amid rapid AI-driven change” based on the coordination of national regulatory authorities and international standard-setting bodies. Such a body could also help prevent global technology actors from “engaging in regulatory arbitrage or hiding behind corporate domiciles.” Finally, it could take up responsibility for governing open-source AI and censoring uploads of highly dangerous models.153

5.5 Critiques of this model: As there have been relatively few proposals for this model, there are not yet many critiques. However, possible critiques might question the adequacy of relying on international institutions to respond to (rather than prevent) situations where dangerous AI systems have already been deployed, as coordinating, communicating, and implementing effective countermeasures in those situations might be very difficult or far too slow to adequately counter a misaligned AI system.

Model 6: International joint research

6.1 Functions and types: The function of this institutional model is to start a bilateral or multilateral collaboration between states or state entities to solve a common problem or achieve a common goal. Most institutions following this model would focus on accelerating the development of a technology or exploitation of a resource by particular actors in order to avoid races. Others would aim at speeding up the development of safety techniques. 

In some proposals, an institution like this aims not just to rally and organize a major research project, but simultaneously to include elements of an enforcing institution in order to exclude all other actors from conducting research and/or creating capabilities around that problem or goal, creating a de facto or an explicit international monopoly on an activity. 

6.2 Common examples: Examples that are pointed to as models of an international joint scientific program include the European Organization for Nuclear Research (CERN),154 ITER, the International Space Station (ISS), and the Human Genome Project.155 Example models of a (proposed) international monopoly include the Acheson-Lilienthal Proposal156 and the resulting Baruch Plan, which called for the creation of an Atomic Development Authority.157 

6.3 Underexplored examples: Examples that are not yet discussed in the literature but that could be promising are the James Webb Space Telescope and the Laser Interferometer Gravitational-Wave Observatory (LIGO),158 which is organized internationally through the LIGO Scientific Collaboration (LSC).

6.4 Proposed AI institutions along this model: There are various explicit AI proposals along the joint scientific program model.159 Some proposals focus primarily on accelerating safety. Lewis Ho and others suggest an “AI Safety Project” to “promote AI safety R&D by promoting its scale, resourcing and coordination.” To ensure AI systems are reliable and less vulnerable to misuse, this institution would have access to significant compute and engineering capacity as well as to AI models developed by AI companies. Unlike other international joint scientific programs such as CERN or ITER, which are strictly intergovernmental, Ho and others propose that the AI Safety Project comprise other actors as well (e.g., civil society and industry). The authors also suggest that, to prevent replication of models or diffusion of dangerous technologies, the AI Safety Project should incorporate information and security measures such as siloing information, structuring model access, and designing internal review processes.160 De Neufville and Baum point out that “a clearinghouse for research into AI” could address the collective action problem of underinvestment in basic research, AI ethics, and safety research.161 More ambitiously, Ramamoorthy and Yampolskiy propose a “Benevolent AGI Treaty,” which involves “the development of AGI as a global, non-strategic humanitarian objective, under the aegis of a special agency within the United Nations.”162

Other proposals suggest intergovernmental collaboration for the development of AI systems more generally. Daniel Zhang and others at Stanford University’s HAI recommend that the United States and like-minded allies create a “Multilateral Artificial Intelligence Research Institute (MAIRI)” to facilitate scientific exchanges and promote collaboration on AI research—including the risks, governance, and socio-economic impact of AI—based on a foundational agreement outlining agreed research practices. The authors suggest that MAIRI could also strengthen policy coordination around AI.163 Fischer and Wenger add that a “neutral hub for AI research” should have four functions: (a) fundamental research in the field of AI, (b) research and reflection on societal risks associated with AI, (c) development of norms and best practices regarding the application of AI, and (d) further education for AI researchers. This hub could be created by a conglomerate of like-minded states but should eventually be open to all states and possibly be linked to the United Nations through a cooperation agreement, according to the authors.164 Other authors posit that an international collaboration on AI research and development should include all members of the United Nations from the start, as similar projects like the ISS or the Human Genome Project have done. They suggest that this approach might reduce the possibility of an international conflict.165 In this vein, Kemp and others call for the foundation of a “UN AI Research Organization (UNAIRO),” which would focus on “building AI technologies in the public interest, including to help meet international targets […] [a] secondary goal could be to conduct basic research on improving AI techniques in the safest, careful and responsible environment possible.”166

Philipp Slusallek, Scientific Director of the German Research Center for Artificial Intelligence, suggests a "CERN for AI"—"a collaborative, scientific effort to accelerate and consolidate the development and uptake of AI for the benefit of all humans and our environment." Slusallek promotes a very open and transparent design for this institution, in which data and knowledge would flow freely between collaborators.167 Similarly, the Large-scale Artificial Intelligence Open Network (LAION) calls for a CERN-like open-source collaboration among the United States and allied countries to establish an international "supercomputing research facility" hosting "a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators," overseen by democratically elected institutions from participating countries.168 Daniel Dewey goes a step further and suggests a potential joint international AI project with a monopoly over hazardous AI development, in the same spirit as the 1946 Baruch Plan, which proposed an international Atomic Development Authority with a monopoly over nuclear activities. Dewey concedes, however, that this proposal may be politically intractable.169 In another proposal for monopolized international development, Miotti suggests a "Multilateral AGI Consortium" (MAGIC), an international organization mandated to run "the world's only advanced and secure AI facility focused on safety-first research and development of advanced AI."170 This organization would share breakthroughs with the outside world only once they were proven demonstrably safe, and it would therefore be coupled with a global moratorium on the creation of AI systems exceeding a set compute-governance threshold.

The proposals for an institution analogous to CERN discussed thus far envision a grand institution that draws talent and resources for the research and development of AI projects in general. Other proposals have a narrower focus. Charlotte Stix, for example, suggests that a more decentralized version of this model could be more beneficial, arguing that a "European Artificial Intelligence megaproject could be composed of a centralized headquarters to overview collaborations and provide economies of scale for AI precursors within a network of affiliated AI laboratories that conduct most of the research."171 Other authors argue that, rather than focus on AI research in general, an international research collaboration could focus on the use of AI to solve problems in a specific field, such as climate change, health, privacy-enhancing technologies, economic measurement, or the sustainable development goals.172 

6.5 Critiques of this model: In general, there have been few sustained critiques of this institutional model. However, Ho and others note that an international collaboration to conduct technical AI-safety research could pull safety researchers away from frontier AI developers, reducing in-house safety expertise. In addition, any international project that needs access to advanced AI models would raise security concerns, including the risk of model leaks.173

More fundamental critiques do exist, however. For instance, Kaushik and Korda question the feasibility of a "Manhattan Project-like undertaking to address the 'alignment problem'," arguing that massively accelerating AI-safety research through any large-scale governmental project is infeasible. They further argue that the analogy is inapt because the Manhattan Project had a singular goal, whereas in AI safety "ten thousand researchers have ten thousand different ideas on what it means and how to achieve it."174 

Model 7: Distribution of benefits and access 

7.1 Functions and types: The function of this institutional model is to provide access to the benefits of a technology or a global public good to those states or individuals who lack it for geographic, economic, or other reasons. Very often, the aim of such an institution is to facilitate unrestricted access, or even access schemes targeted at the neediest and most deprived. When the information or goods being shared could pose a risk or be misused, yet responsible access is still considered a legitimate, necessary, or beneficial goal, institutions under this model tend to create a system of conditional access. 

7.2 Common examples: Examples of unrestricted benefit-distributor institutions include international public-private partnerships like Gavi, the Vaccine Alliance, and the Global Fund to Fight AIDS, Tuberculosis and Malaria.175 Examples of conditional benefit-distributor institutions might include the IAEA’s nuclear fuel bank.176

7.3 Underexplored examples: Examples that are not yet invoked in the AI context but that could be promising include the Nagoya Protocol’s Access and Benefit-sharing Clearing-House (ABS Clearing-House),177 the UN Climate Technology Centre and Network,178 and the United Nations Industrial Development Organization (UNIDO), which is tasked with helping build up industrial capacities in developing countries. 

7.4 Proposed AI institutions along this model: Stafford and Trager draw an analogy between the NPT and a potential international regime to govern transformative AI. The basis for this comparison is that both technologies are dual-use, both present risks even in civilian applications, and there are significant gaps in different states' access to these technologies. Just as in the case of nuclear energy, in a scenario where there is a clear leader in the race to develop AI while others lag behind, it is mutually beneficial for the actors to enter a technology-sharing bargain: the leader ensures it will remain at the front of the race, while the laggards secure access to the technology. Stafford and Trager call this the "Hopeless Laggard effect." To enforce such a technology-sharing bargain in the sphere of transformative AI, an international institution would have to be created to perform functions similar to those of the IAEA's Global Nuclear Safety and Security Network, which transfers knowledge from countries with mature nuclear energy programs to those just starting to develop one. As an alternative, the authors suggest that the leader in AI could prevent the laggards from engaging in a race by sharing the wealth resulting from transformative AI.179 

The final report of the US National Security Commission on Artificial Intelligence proposed an International Digital Democracy Initiative (IDDI) "with allies and partners to align international assistance efforts to develop, promote, and fund the adoption of AI and associated technologies that comports with democratic values and ethical norms around openness, privacy, security, and reliability."180 

Ho and others envision a model that incorporates the private sector into the benefit-distribution dynamic. A “Frontier AI Collaborative” could spread the benefits of cutting-edge AI—including global resilience to misused or misaligned AI systems—by acquiring or developing AI systems with a pool of resources from member states and international development programs, or from AI laboratories. This form of benefit-sharing could have the additional advantage of incentivizing states to join an international AI governance regime in exchange for access to the benefits distributed by the collaborative.181 More broadly, the Elders suggest creating an institution analogous to the IAEA to guarantee that AI’s benefits are “shared with poorer countries.”182 In forthcoming work, Adan sketches key features for a Fair and Equitable Benefit Sharing Model, to “foster inclusive global collaboration in transformative AI development and ensure that the benefits of AI advancements are equitably shared among nations and communities.”183 

7.5 Critiques of this model: One challenge faced by benefit-distributor institutions is how to balance the risk of proliferation against ensuring meaningful benefits and uptake from their technology-promotion and distribution work.184 For instance, Ho and others suggest that proposals such as their Frontier AI Collaborative could risk inadvertently diffusing dangerous dual-use technologies while simultaneously encountering barriers to effectively empowering underserved populations with AI.185 

More fundamentally, global benefit- and access-providing institutions, especially those that involve some form of conditional access, will likely face challenges (and critiques) on the basis of how they organize participation. In recent years, several researchers have argued that the global governance of AI has seen only limited participation by states from the Global South;186 Veale and others have recently critiqued many initiatives to secure "AI for Good" or "responsible AI," arguing that these have fallen into a "paradox of participation," one involving "the surface-level participation of Global South stakeholders without providing the accompanying resources and structural reforms to allow them to be involved meaningfully."187 It is likely that similar critiques will be raised against benefit-distributing institutions.

II. Directions for further research

In light of the literature review conducted in Part I, we can consider a range of additional directions for further research. Without intending to be exhaustive, this section briefly discusses some of those directions, offering initial thoughts on the gaps in the current literature and on how each line of research might help inform key decisions around the international governance of AI: whether or when to create international institutions, which institutional models to prioritize, how to establish these institutions, and how to design them for effectiveness, among others.

Direction 1: Effectiveness of institutional models

In the above summary, we have outlined potential institutional models for AI without always assessing their weaknesses or their effectiveness in meeting their stated goals. We believe such further analysis could be critical, however, to identify which models would be apt to govern the risks from AI and to reduce such risks de facto (not just de jure).

There is, of course, a live debate on the “effectiveness” of international law and institutions, with an extensive literature that tries to assess patterns of state compliance with different regimes in international law188 as well as more specific patterns affecting the efficacy of international organizations189 or their responsiveness to shifts in the underlying problem.190

Such work has highlighted the imperfect track record of many international treaties in meeting their stated purposes,191 the various ways in which states may aim to evade obligations even while complying with the letter of the law,192 the ways in which states may use international organizations to promote narrow national interests rather than broader organizational objectives,193 and the situations in which states aim to exit, shift away from, or replace existing institutions with new alternatives.194 In contrast, other studies have explored the deep normative changes that international norms have historically achieved on topics such as the role of territorial war,195 the transnational and domestic mechanisms by which states are pushed to commit to and comply with different treaties,196 the more nuanced conditions that may induce greater or lesser state compliance with norms or treaties,197 and the effective role that even nonbinding norms may play,198 as well as arguments that a narrow focus on state compliance with international rules understates the broader effects that those obligations may have on the way states bargain in light of those norms (even when they aim to bend them).199 Likewise, there is a larger body of foundational work that considers whether traditional international law, based on state consent, is an adequate tool to secure global public goods such as those around AI, even if states complied with their obligations.200 

Work to evaluate the (prospective) effectiveness of international institutions on AI could draw on this extensive body of literature to learn lessons from the successes and failures of past regimes, as well as on scholarship on the appropriate design of different bodies201 and on measures to improve the decision-making performance of such organizations,202 in order to understand when or how any given institutional model might be most appropriately designed for AI.

Direction 2: Multilateral AI treaties without institutions 

While our review has focused on international AI governance proposals that would involve establishing some form of international institution, there are of course other models of international cooperation. Indeed, some types of treaties do not establish distinct international organizations203 and function primarily by setting shared patterns of expectations and reciprocal behavior amongst states, in order (ideally) to become self-enforcing. As noted, our literature review does not cover this type of regime. However, analyzing such regimes in combination with the models we have outlined could help identify alternative avenues for the international governance of AI, including whether or when state initiatives to establish multilateral normative regimes that lack treaty bodies would be effective or would fall short.

Such an analysis could draw from a rich vein of existing proposals for new international treaties on AI. There have of course been proposals for new treaties on autonomous weapons.204 There are also proposals for international conventions to mitigate extreme risks from technology. Some of these, such as Wilson's "Emerging Technologies Treaty"205 or Verdirame's "Treaty on Risks to the Future of Humanity,"206 would address many types of potential existential risks simultaneously, including potential risks from AI. 

Other treaty proposals focus more specifically on regulating AI risk in particular. Dewey discusses a potential "AI development convention" that would set down "a ban or set of strict safety rules for certain kinds of AI development."207 Yet others address different types of risks from AI, such as Metzinger's proposal for a global moratorium on artificial suffering.208 Carayannis and Draper discuss a "Universal Global Peace Treaty" (UGPT), which would commit states "not to declare or engage in interstate war, especially via existential warfare, i.e., nuclear, biological, chemical, or cyber war, including AI- or ASI-enhanced war." They would see this treaty supported by a separate Cyberweapons and AI Convention, whose main article would commit that "each State Party to this Convention undertakes never in any circumstances to develop, produce, stockpile or otherwise acquire or retain: (1) cyberweapons, including AI cyberweapons; and (2) AGI or artificial superintelligence weapons."209 

Notwithstanding these proposals, there are significant gaps in the scholarship on the design of an international treaty for AI regulation. Issues that we believe should be explored include, but are not limited to, the effects of reciprocity on the behavior of states parties, the relationship between a treaty's specificity and its pervasiveness, the success and adaptability of the framework convention model (a broad treaty complemented by protocols that specify the initial treaty's obligations) in accomplishing its goals, and adjudicatory options for conflicts between states parties. 

Direction 3: Additional institutional models not covered in detail in this review

There are many other institutional models that this literature review does not address, as they are (currently) rarely proposed in the specific context of international AI governance. These include, but are not limited to:210

  • Various international non-governmental organizations (NGOs), e.g., the World Wide Fund for Nature (WWF) or Amnesty International;
  • Political and economic unions, e.g., Association of Southeast Asian Nations (ASEAN); 
  • Military alliances that establish security guarantees and/or political, economic, and defense cooperation, e.g., the North Atlantic Treaty Organization (NATO), the Shanghai Cooperation Organisation (SCO), or the Collective Security Treaty Organization (CSTO); 
  • International courts and tribunals, e.g., the International Criminal Court (ICC), various regional courts of human rights (African Court on Human and Peoples’ Rights, the European Court of Human Rights, the Inter-American Court of Human Rights);
  • Interstate arbitral and dispute settlement bodies, e.g., the International Court of Justice (ICJ); the WTO Appellate Body, which hears appeals in disputes between WTO Members; the International Tribunal for the Law of the Sea (ITLOS), which is one of the dispute resolution mechanisms for the UN Convention on the Law of the Sea (UNCLOS); the Permanent Court of Arbitration (PCA), which resolves disputes arising out of international agreements between member states, international organizations, or private parties; or the European Nuclear Energy Tribunal, which oversees nuclear energy disputes within the OECD;
  • Cartels aimed at articulating, aggregating, and securing the (economic) interests of their members, e.g., the Organization of the Petroleum Exporting Countries (OPEC), whose members cooperate to reduce market competition but whose operations may be protected by the doctrine of state immunity under international law;
  • Policy implementation and/or direct service delivery organizations, e.g., the United Nations Development Programme (UNDP) or the World Bank;
  • Data gathering and dissemination organizations, e.g., the World Meteorological Organization's (WMO) climate data monitoring or the Food and Agriculture Organization's (FAO) gathering of statistics on global food production;
  • Post-disaster response and relief organizations, e.g., the World Food Programme (WFP) or the International Committee of the Red Cross (ICRC);
  • Capacity-building and training organizations, e.g., governmental capacity-building trainings offered by the United Nations Institute for Training and Research (UNITAR), fiscal management training programs offered by the International Monetary Fund (IMF), or border-control trainings provided by the International Organization for Migration (IOM);
  • Norm promotion organizations, e.g., the UNESCO World Heritage site program or UNHCR advocacy for refugee rights;
  • Awareness-raising organizations, e.g., the Joint United Nations Programme on HIV/AIDS (UNAIDS), which amongst others organizes World AIDS Day; and
  • "Meta"-organizations, which aim to support or enhance the activities of other existing international organizations in general, e.g., the United Nations Office for Project Services (UNOPS).

Accordingly, future lines of research could focus on exploring what such models could look like for AI and what they might contribute to international AI governance.

Direction 4: Compatibility of institutional functions 

There are multiple instances of compatibility between the functions of the institutions proposed in the literature explored in this review. A better sense of those areas of compatibility could be advantageous for designing an international institution for AI that borrows the best features of each model rather than copying a single model. Further research could explore hybrid institutions that combine functions from several models. Some potential combinations include, but are not limited to:

  • Comprehensive scientific consortia, which could combine elements from scientific consensus-building institutions, international joint scientific programs, and (scientific) benefit-distributing institutions;
  • Full-spectrum consensus-building fora, which could combine elements from scientific consensus-building with political consensus-building institutions and potentially with stabilization and emergency-response institutions;
  • Integrated regulator institutions, which could combine elements from regulatory and policy coordinator institutions with monitoring and verification institutions; and
  • Centralized control institutions, which could combine elements from nonproliferation and export-control institutions with access-controlling institutions, and potentially with monitoring and verification institutions.

Direction 5: Potential fora for an international AI organization

This review has not sought to identify patterns among the different proposals regarding their preferred fora for negotiating or hosting an international AI organization. While we do not expect there to be much commentary on this question, it might be a useful additional element to take into consideration when designing an international AI institution. For example, some fora that have been proposed are:

  • The United Nations could establish a UN specialized agency through state negotiations or an initiative at the UN General Assembly.211 For instance, as seen above, Kemp and others call for a UN AI Research Organization (UNAIRO).212 
  • Regional organizations, such as the European Union, the Organization of American States, the African Union, or ASEAN, could pioneer regional regulatory regimes that exert indirect extraterritorial effects on global AI governance. The European Union in particular has proven to be effective at indirectly regulating industries at a global level through the so-called Brussels Effect.213 Siegmann and Anderljung suggest that the EU AI Act could have a similar effect on the global AI industry.214
  • Minilateral club organizations like the G7, BRICS, the G20, or the OECD could play a similar role, bringing together like-minded countries to negotiate an international governance framework for AI that other states can then join.215
  • Public-private partnerships or coalitions between state and non-state actors, such as the Lysøen Initiative on human security216 or the Christchurch Call, an initiative (led by France and Aotearoa New Zealand) to eliminate online terrorist and violent extremist content,217 could organize coalitions of like-minded states and actors to pursue the negotiation of new treaties, where necessary outside of UN fora.
  • Gradual formalization of initially informal institutions: in some cases, organizations first established in an informal configuration could lay the foundation for formal frameworks of cooperation, as happened with the gradual transformation of the General Agreement on Tariffs and Trade (GATT) into the WTO, a route that Erdélyi and Goldsmith suggest could also be taken by an "International Artificial Intelligence Organization."218

This does not exhaust the available or feasible avenues, however. In many cases, significant additional work will have to be undertaken to evaluate these pathways in detail.

Conclusion

This literature review analyzed seven models for the international governance of AI, discussing common examples of those models, underexplored examples, specific proposals for their application to AI in existing scholarship, and critiques. We found that, while the literature covers a wide range of options for the international governance of AI, proposals are often vague and do not develop the specific attributes an international institution would need in order to realize the benefits and curb the risks associated with AI. We therefore proposed a series of pathways for further research that we expect will contribute to the design of such an institution.

