International law and advanced AI: exploring the levers for ‘hard’ control
If and when states choose to institute binding international rules on the development and use of artificial intelligence, there are three promising areas where they might act.
The question of how artificial intelligence (AI) is to be governed has risen rapidly up the global agenda – and in July 2023, United Nations Secretary-General António Guterres raised the possibility of the “creation of a new global body to mitigate the peace and security risks of AI.” While the past year has seen the emergence of multiple initiatives for AI’s international governance – by states, international organizations and within the UN system – most of these remain in the realm of non-binding ‘soft law.’ However, many influential voices in the debate increasingly argue that the challenges posed by future AI systems mean that international AI governance will eventually need to include legally binding elements.
If and when states choose to take up this challenge and institute binding international rules on advanced AI – either under a comprehensive global agreement, or between a small group of allied states – there are three principal areas where such controls might usefully bite. First, states might agree to controls on particular end uses of AI that are considered most risky or harmful, drawing on the European Union’s new AI Act as a general model. Second, controls might be introduced on the technology itself, structured around the development of certain types of AI systems, irrespective of use – taking inspiration from arms control regimes and other international attempts to control or set rules around certain forms of scientific research. Third, states might seek to control the production and dissemination of the industrial inputs that power AI systems – principally the computing power that drives AI development – harmonizing export controls and other tools of economic statecraft.
Ahead of the upcoming United Nations Summit of the Future and the French-hosted international AI summit in 2025, this post explores these three possible control points and the relative benefits of each in addressing the challenges posed by advanced AI. It also considers the structural questions that any binding regime would need to address – including the breadth of state participation, how participation might be incentivized, the role that private sector AI labs might play, and the means by which equitable distribution of AI’s benefits could be enabled. This post is informed by ongoing research projects into the future of international AI governance undertaken by the Institute for Law & AI, Lawfare’s Legal AI Safety Initiative, and others.
Hard law approaches to AI governance
The capabilities of AI systems have advanced rapidly over the past decade. While these systems present significant opportunities for societal benefit, they also engender new risks and challenges. Possible risks from the next wave of general-purpose foundation models, often termed “frontier” or “advanced” AI, include increases in inequality, misuse by harmful actors, and dangerous malfunctions. Moreover, AI agents able to make and execute long-term plans may soon proliferate, and would pose particular challenges of their own.
As a result of these developments, states are beginning to take concrete steps to regulate AI at the domestic level. These include the United States’ Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, the European Union’s AI Act, the UK’s AI White Paper and subsequent public consultation, and Chinese laws covering both the development and use of various AI systems. At the same time, given the rapid pace of change and the cross-border nature of AI development and its potential harms, it is increasingly recognized that domestic regulation alone is unlikely to be adequate to address the full spread of challenges that advanced AI systems pose.
As a result, recent years have also witnessed a growing number of initiatives for international coordination of AI policy. In the twenty months since the launch of OpenAI’s ChatGPT propelled AI to the top of the policy agenda, we have seen two international summits on AI safety; the conclusion of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; the launch of the G7’s Hiroshima Process on responsible AI governance; and the creation of a UN Advisory Body on international AI governance.
These ongoing initiatives are unlikely to represent the limits of states’ ambitions for AI coordination on the international plane. Indeed, should the pace of AI capability development continue as it has over the last decade, it seems likely that in the coming years states will choose to pursue some form of binding ‘hard law’ international governance for AI – moving beyond the mostly soft law commitments that have characterized diplomatic efforts to date. Geopolitical developments, a rapid jump in AI capabilities, or a significant AI security incident or crisis might also push states towards a hard law approach. Throughout 2023, several influential participants in the debate began to raise the possibility that binding international governance may become necessary once AI systems reach a certain capability level – most notably the AI lab OpenAI. A number of political and moral authorities have gone further and called for the immediate institution of binding international controls on AI – including The Elders, an influential group of former politicians, who have called for an “international treaty establishing a new international AI safety agency,” and Pope Francis, who has urged the global community to adopt a “binding international treaty that regulates the development and use of artificial intelligence in its many forms.”
To date, these calls for binding international governance have been made only at a high level of abstraction, without detailed proposals for how a binding international AI governance regime might be structured or which activities should be controlled. Moreover, the advanced state of the different soft law approaches currently in progress means that the design and legal form of any hard law regime that is eventually instituted would be heavily conditioned by the AI governance initiatives and institutions that precede it. Nevertheless, given the significant possibility of states beginning discussion of binding AI governance in the coming years, there is value in surveying the areas where controls could be implemented, assessing the contribution those controls might make in addressing the challenges of AI, and identifying the relevant institutional antecedents.
Three control points
There are three main areas where binding international controls on AI might bite: on particular ‘downstream’ uses of AI, on the upstream ‘development’ of AI systems, and on the industrial inputs that underpin the development of AI systems.
Downstream uses of AI
If the primary motivation behind states introducing international controls is a desire to mitigate the perceived risks from advanced AI, then the most natural approach would be to structure those controls around the particular AI uses considered to pose the greatest risk. The most prominent piece of domestic AI regulation – the European Union’s AI Act – follows this approach, introducing different tiers of control for uses of AI systems based on the perceived risk of those use cases. Those deemed most harmful – for example, the use of AI for social scoring or in biometric systems intended to predict criminality – are prohibited outright.
This form of control could be replicated at an international level. Existing international law imposes significant constraints on certain uses of AI – such as the protections provided by international human rights law and international humanitarian law. However, explicitly identifying and controlling particular harmful AI uses would add an additional layer of granularity to these constraints. Should states wish to do so, arms control agreements offer one model for how this could be done.
The principal benefit of a use-based approach to the international control of AI is its simplicity: where particular AI uses are most harmful, they can be controlled or prohibited. States should in theory also be able to update any new treaty regime, adding further harmful uses of AI to a controlled list should they wish to do so – and provided they are able to agree on them. Nevertheless, structuring international controls solely around identified harmful uses of AI has certain limitations. Most importantly, while such a use-based governance regime would go a significant way towards addressing the risks posed by the deliberate misuse of AI, its impact in reducing other forms of AI risk is less clear.
As reported by the 2024 International Scientific Report on the Safety of Advanced AI, advanced AI systems may also pose risks stemming from the potential malfunction of those systems – regardless of their particular application or form of use. The “hallucinations” generated by the most advanced chatbots, in spite of their developers’ best intentions, are an early example of this. At the extreme, certain researchers have posited that developers might lose the ability to control the most advanced systems. The malfunction or loss of control of more advanced systems could have severe implications as they are increasingly incorporated into critical infrastructure, such as energy, financial or cybersecurity networks. For example, a malfunction of an AI system incorporated into military systems, such as nuclear command, control and communications infrastructure, could have catastrophic consequences. Use-based governance may be able to address this issue in part, by regulating the extent to which AI technology is permitted to be integrated into critical infrastructure at all – but such a control would not address the possibility of unexpected malfunction or loss of control of an AI system used in a permitted application.
Upstream development of AI
Given the possibility of dangerous malfunctions in advanced AI systems, a complementary approach would be to focus on the technology itself. Such an approach would entail structuring an international regime around controls on the upstream development of AI systems, rather than particularly harmful applications or uses.
International controls on upstream AI development could be structured in a number of ways. Controls could focus on security measures. These could include mandatory information security or other protective requirements, to ensure that key components of advanced AI systems, such as model weights, cannot leak or be stolen by harmful actors or geopolitical rivals. The regime might also require the testing of AI systems against agreed safety metrics prior to release, with systems that fail prohibited from release until they can be demonstrated to be safe. Alternatively, international rules might focus on whether a state party’s jurisdiction as a whole complies with agreed safety and oversight standards, rather than on the safety of individual AI systems or training runs.
Controls could also focus on increasing transparency or other confidence-building measures. States could introduce a mandatory warning system should AI models reach certain capability thresholds, or should there be an AI security incident. A regime might also include a requirement to notify other state parties – or the treaty body, if one was created – before beginning training of an advanced AI system, allowing states to convene and discuss precautionary measures or mitigations. Alternatively, the regime could require that other state parties or the treaty body give approval before advanced systems are trained.

If robustly enforced, structuring controls around AI development would contribute significantly towards addressing the security risks posed by advanced AI systems. However, this approach to international governance also has its challenges. In particular, given that smaller AI systems are unlikely to pose significant risks, participants in any regime would likely need to agree on thresholds for the introduction of controls, with these applying only to AI systems above a certain size or anticipated capability level. Provision may be needed to periodically update any such threshold in line with technological advances. In addition, given the benefits that advanced AI is expected to bring, an international regime controlling AI development would also need to include provision for the continued safe development of advanced AI systems above any capability threshold.
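To make the idea of a size-based trigger concrete, the minimal sketch below shows one way such a threshold could be expressed in code. It is illustrative only: the 10^26-operation figure mirrors the reporting threshold used in the U.S. Executive Order rather than any agreed international standard, and the ‘6 × parameters × training tokens’ rule of thumb for estimating training compute is an approximation; a real regime would negotiate its own threshold and measurement methodology.

```python
# Illustrative sketch only: one way a treaty body might express a compute-based
# threshold determining when development-stage controls apply. The 1e26 figure
# mirrors the reporting threshold in the U.S. Executive Order and is used here
# purely as an example; a real regime would negotiate and periodically revise
# its own value and measurement methodology.

TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26  # hypothetical treaty threshold


def estimated_training_compute(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D approximation."""
    return 6 * parameters * training_tokens


def controls_apply(parameters: float, training_tokens: float) -> bool:
    """Return True if a planned training run would fall under the hypothetical regime."""
    return estimated_training_compute(parameters, training_tokens) >= TRAINING_COMPUTE_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a 400-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 4e11, 1.5e13
    print(f"Estimated training compute: {estimated_training_compute(params, tokens):.2e} FLOP")
    print(f"Development controls apply: {controls_apply(params, tokens)}")
```

In practice, any such trigger would need to account for anticipated capability as well as raw compute, and would require the periodic updating described above.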
Industrial inputs: AI compute
Finally, a third approach to international governance would be for states to move a step further upstream and focus on the AI supply chain. Supply-side controls on basic inputs have been used successfully in the past to address challenges posed by advanced technology. An equivalent approach here would involve structuring international controls around the industrial inputs necessary for the development of advanced AI systems, with a view to shaping the development of those systems.
The three principal inputs used to train AI systems are computing power, data and algorithms. Of these, computing power (“compute”) is the most viable node for control by states, and hence the focus of this section. This is because AI models are trained on physical semiconductor chips, which are by their nature quantifiable (they can be counted), detectable (they can be identified and physically tracked), and excludable (they can be restricted). The supply chain for AI chips is also exceptionally concentrated. These properties mean that controlling the distribution of AI compute would likely be technologically feasible – should states be able to agree on how to do so.
International agreements on the flow and usage of AI chips could help reduce the risks from advanced AI in a number of different ways. Binding rules around the flow of AI chips could be used to augment or enforce a wider international regime covering AI uses or development – for example by denying these chips to states that violate the regime or to non-participating states. Alternatively, international controls on AI’s industrial inputs might be used to directly shape the trajectory of AI development, by directing the flow of chips towards certain actors – potentially reducing the need to control downstream uses or upstream development of AI systems at all. Future technological advances may also make it possible to monitor the use of individual semiconductor chips, which would be useful in verifying compliance with any binding international rules on the development of AI systems.
Export control law can provide the conceptual basis for international control of AI’s industrial inputs. The United States has already introduced a sweeping set of domestic controls on the export of semiconductors, with a view to restricting China’s ability to acquire the chips needed to develop advanced AI and to maintaining the U.S. technological advantage in this space. These U.S. controls could serve as the basis for an expanded international semiconductor export control regime between the U.S. and its allies. Existing or historic multilateral export control regimes could also serve as a model for a future international agreement on AI compute exports. These include the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), under which Western states coordinated an arms embargo on Eastern Bloc countries, and its successor, the Wassenaar Arrangement, through which participating states harmonize controls on exports of conventional arms and dual-use items.
In order to be effective, controls on the export of physical AI chips would likely need to be augmented by restrictions on the proliferation both of AI systems themselves and of the technology necessary to develop semiconductor manufacturing capability outside of participating states. Precedent for such a provision can be found in a number of international arms control agreements. For example, Article I of the Nuclear Non-Proliferation Treaty prohibits designated nuclear weapon states from transferring nuclear weapons or control over such weapons to any recipient, and from assisting, encouraging or inducing non-nuclear weapon states to manufacture or otherwise acquire such weapons. A similar provision controlling exports of semiconductor design and manufacturing technology – perhaps again based on existing U.S. export controls – could be included in an international AI regime.
Structural challenges
Any binding regime for governing advanced AI that states agree upon – whichever of the above controls it incorporates – would face a number of structural challenges.
Private sector actors
The first of these stems from the nature of the current wave of AI development. Unlike many of the twentieth century’s most significant AI advances, which were driven by governments or academia, the most powerful AI models today are almost exclusively designed in corporate labs, trained using privately produced chips, and run on commercial cloud data centers. While certain AI companies have experimented with corporate structures such as long-term benefit trusts or capped-profit provisions, commercial concerns are the major driver behind most of today’s AI advances – a situation that is likely to continue in the near future, absent significant government investment in AI capabilities.
As a result, a binding international regime aiming to control AI use or development would require a means of legally ensuring the compliance of private sector AI labs. This could be achieved by obliging participating state parties to implement the regime through domestic law. Alternatively, the treaty instituting the regime could impose obligations directly on corporations – a less common approach in international law. Even then, however, primary responsibility for enforcing the regime and remedying breaches would likely still fall on states.
Breadth of state participation
A further issue relates to the breadth of state participation in any binding international regime: should it be targeted or comprehensive? At present, the frontier of the AI industry is concentrated in a small number of countries. A minilateral agreement concluded between a limited group of states (such as the U.S. and its allies) would almost certainly be easier to reach consensus on than a comprehensive global agreement. Given the pace of AI development, and concerns regarding the capabilities of the forthcoming generation of advanced models, there is significant reason to favor establishing a minimally viable international agreement as quickly as possible.
Nevertheless, a major drawback of a minilateral agreement concluded between a small group of states – in contrast to a comprehensive global agreement – would be its legitimacy. Although AI development is currently concentrated in a small number of states, any harms that result from the misuse or malfunction of AI systems are unlikely to remain confined within the borders of those states. In addition, citizens of the Global South may be least likely to realize the economic benefits that result from AI technological advances. As such, there is a strong normative argument for giving a voice to a broad group of states in the design of any international regime intended to govern the technology’s development – not simply those that are currently most advanced in terms of AI capabilities. In the absence of this, any regime would likely suffer from a critical lack of global legitimacy, threatening both its longevity and the likelihood of other states later agreeing to join.
A minilateral agreement aiming to institute binding international rules to govern AI would therefore need to include a number of provisions to address these legitimacy concerns. First, while it may prove more practicable to initially establish governance amongst a small group of states, it would greatly aid legitimacy if participants explicitly committed to working towards the establishment of a global regime, and opened the regime for all states to join, provided they agree to the controls and any enforcement mechanisms. Precedent for such a provision can be found in other international agreements – for example, the 1990 Chemical Weapons Accord between the U.S. and the USSR, which included a pledge to work towards a global prohibition on chemical weapons and eventually led to the 1993 Chemical Weapons Convention, which is open to all states to join.
Incentives and distribution
This brings us to incentives. In order to encourage broad participation in the regime, states with less developed AI sectors may need to be offered inducements to join – particularly given that joining might curtail their freedom to develop their own domestic AI capabilities. One way of doing so would be to include a commitment from leading AI states to share the benefits of AI advances with less developed states, conditional on those participants complying with the restrictive provisions of the agreement – a so-called ‘dual mandate.’
Inspiration for such an approach could be drawn from the Nuclear Non-Proliferation Treaty, under which non-nuclear weapon participants agree to forgo the right to develop nuclear weapons in exchange for the sharing of “equipment, materials and scientific and technological information for the peaceful uses of nuclear energy.” An equivalent provision in an AI governance regime might, for example, grant participating states the right to access the most advanced systems for public sector or economic development purposes, and promise assistance in incorporating those systems into beneficial use cases.
The international governance of AI remains a nascent project. Whether binding international controls of any form come to be implemented in the near future will depend upon a range of variables and political conditions, including the direction of AI technological development and the evolution of relations between leading AI states. As such, the feasibility of a binding international governance regime for AI remains to be seen. In light of 2024’s geopolitical tensions, and the traditional reticence of the U.S. and China to accept international law restrictions that infringe on sovereignty or national security, binding international AI governance appears unlikely to be established imminently.
However, this position could change rapidly. Technological or geopolitical developments – such as a rapid and unexpected jump in AI capabilities, a shift in global politics, or an AI-related security incident or crisis with global impact – could act as forcing mechanisms, leading states to support the introduction of international controls. In such a scenario, states will likely wish to implement those controls quickly, and will require guidance on both the form they should take and how they might be enacted.
Historical analogy suggests that international negotiations of a magnitude commensurate with the challenges AI will pose typically take many years to conclude. It took over ten years from the initial UN discussions of international supervision of nuclear material for the statute of the International Atomic Energy Agency to be negotiated. In the case of AI, states will likely not have that long. Given the stakes at hand, lawyers and policymakers should therefore begin considering, as a matter of urgency, both the form that future international AI governance should take and how it might be implemented.