What should be internationalised in AI governance?
Abstract
As artificial intelligence (AI) advances, states increasingly recognise the need for international governance to address shared benefits and challenges. However, international cooperation is complex and costly, and not all AI issues require cooperation at the international level. This paper presents a novel framework to identify and prioritise AI governance issues warranting internationalisation. We analyse nine critical policy areas across data, compute, and model governance using four factors which broadly incentivise states to internationalise governance efforts: cross-border externalities, regulatory arbitrage, uneven governance capacity, and interoperability. We find strong benefits of internationalisation in compute-provider oversight, content provenance, model evaluations, incident monitoring, and risk management protocols. In contrast, the benefits of internationalisation are lower or mixed in data privacy, data provenance, chip distribution, and bias mitigation. These results can guide policymakers and researchers in prioritising international AI governance efforts.
Commerce just proposed the most significant federal AI regulation to date – and no one noticed
A little more than a month ago, the Bureau of Industry and Security (“BIS”) proposed a rule that, if implemented, might just be the most significant U.S. AI regulation to date. Unsurprisingly, the proposal has received relatively scant media attention compared to more ambitious AI governance measures like California’s SB 1047 or the EU AI Act: regulation is rarely an especially sexy topic, and the proposed rule is a dry, common-sense, procedural measure that doesn’t require labs to do much of anything besides send an e-mail or two to a government agency once every few months. But the proposed rule would allow BIS to collect a lot of important information about the most advanced AI models, and it’s a familiar fact of modern life that complex systems like companies and governments and large language models thrive on a diet of information.
This being the case, anyone who’s interested in what the U.S. government’s approach to frontier AI regulation is likely to look like would probably be well-served by a bit of context about the rule and its significance. If that’s you, then read on.
What does the proposed rule do?
Essentially, the proposed rule would allow BIS to collect information on a regular basis about the most advanced AI models, which the proposed rule calls “dual-use foundation models.”[ref 1] The rule provides that any U.S. company that plans to conduct a sufficiently large AI model training run[ref 2] within the next six months must report that fact to BIS on a quarterly basis (i.e., once every three months, by specified reporting deadlines). Companies that plan to build or acquire sufficiently large computing clusters for AI training are similarly required to notify BIS.
Once a company has notified BIS of qualifying plans or activities, the proposed rule states that BIS will send the company a set of questions, which must be answered within 30 days. BIS can also send companies additional “clarification questions” after receiving the initial answers, and these clarification questions must be answered within 7 days.
The proposed rule includes a few broad categories of information that BIS will certainly collect. For instance, BIS is required under the rule to ask companies to report the results of any red-teaming safety exercises conducted and the physical and cybersecurity measures taken to protect model weights. Importantly, however, the proposed rule would not limit BIS to asking these questions—instead, it provides that BIS questions “may not be limited to” the listed topics. In other words, the proposed rule would provide BIS with extremely broad and flexible information-gathering capabilities.
Why does the proposed rule matter?
The notice of proposed rulemaking (“NPRM”) doesn’t come as a surprise—observers have been expecting something like it for a while, because President Biden ordered the Department of Commerce to implement reporting requirements for “dual-use foundation models” in § 4.2(a) of Executive Order 14110. Also, BIS previously sent out an initial one-off survey to selected AI companies in January 2024, collecting information similar to what would be collected on a regular basis under the new rule.
But while the new proposed rule isn’t unexpected, it is significant. AI governance researchers have emphasized the importance of reporting requirements, writing of a “growing consensus among experts in AI safety and governance that reporting safety information to trusted actors in government and industry is key” for responding to “emerging risks presented by frontier AI systems.” And most of the more ambitious regulatory frameworks for frontier AI systems that have been proposed or theorized would require the government to collect and process safety-relevant information. Doing this effectively—figuring out what information needs to be collected and what the collected information means—will require institutional knowledge and experience, and collecting safety information under the proposed rule will allow BIS to cultivate that knowledge and experience internally. In short, the proposed rule is an important first step in the regulation of frontier models.
Labs already voluntarily share some safety information with the government, but these voluntary commitments have been criticized as “vague, sensible-sounding pledge[s] with lots of wiggle room,” and are not enforceable. In effect, voluntary commitments obligate companies only to share whatever information they want to share, whenever they want to share it. The proposed rule, on the other hand, would be legally enforceable, with potential civil and criminal penalties for noncompliance, and would allow BIS to choose what information to collect.
Pushback and controversy
Like other recent attempts to regulate frontier AI developers, the proposed rule has attracted some amount of controversy. However, the recently published public comments on the rule seem to indicate that the rule is unlikely to be challenged in court—and that, unless the next presidential administration decides to change course and scrap the proposed rule, reporting requirements for dual-use foundation models are here to stay.
The proposed rule and the Defense Production Act
As an executive-branch agency, BIS typically only has the legal authority to issue regulations if some law passed by Congress authorizes the kind of regulation contemplated. According to BIS, congressional authority for the proposed rule comes from § 705 of the Defense Production Act (“DPA”).
The DPA is a law that authorizes the President to take a broad range of actions in service of “the national defense.” The DPA was initially enacted during the Korean War and used solely for purposes related to defense industry production. Since then, Congress has renewed the DPA a number of times and has significantly expanded the statute’s definition of “national defense” to include topics such as “critical infrastructure protection and restoration,” “homeland security,” “energy production,” and “space.”
Section 705 of the DPA authorizes the President to pass regulations and conduct industry surveys to “obtain such information… as may be necessary or appropriate, in his discretion, to the enforcement or administration of [the DPA].” While § 705 is very broadly worded, and on its face appears to give the President a great deal of discretionary authority to collect all kinds of information, it has historically been used primarily to authorize one-off “industrial base assessment” surveys of defense-relevant industries. These assessments have typically been time-bounded efforts to analyze the state of a specified industry that result in long “assessment” documents. Interestingly enough, BIS has actually conducted an assessment of the artificial intelligence industry once before—in 1994.[ref 3]
Unlike past industrial base assessments, the proposed rule would allow the federal government to collect information from industry actors on an ongoing basis, indefinitely. This means that the kind of information BIS requests and the purposes it uses that information for may change over time in response to advances in AI capabilities and in efforts to understand and evaluate AI systems. And unlike past assessment surveys, the rule’s purpose is not simply to aid in the preparation of a single snapshot assessment of the industry. Instead, BIS intends to use the information it collects to “ensure that the U.S. Government has the most accurate, up-to-date information when making policy decisions” about AI and the national defense.
Legal and policy objections to reporting requirements under Executive Order 14110
After Executive Order 14110 was issued in October 2023, one of the most common criticisms of the more-than-100-page order was that its reliance on the DPA to justify reporting requirements was unlawful. This criticism was repeated by a number of prominent Republican elected officials in the months following the order’s publication, and the prospect of a lawsuit challenging the legality of reporting requirements under the executive order was widely discussed. But while these criticisms were based on legitimate and understandable concerns about the separation of powers and the scope of executive-branch authority, they were not legally sound. Ultimately, any lawsuit challenging the proposed rule would likely need to be filed by one of the leading AI labs subject to the rule’s requirements, and none of those labs seems inclined to raise the kind of fundamental objections to the rule’s legality that early reactions to the executive order contemplated.
The basic idea behind the criticisms of the executive order was that it used the DPA in a novel way, to do something not obviously related to the industrial production of military materiel. To some skeptics of the Biden administration, or observers generally concerned about the concentration of political power in the executive branch, the executive order looked like an attempt to use emergency wartime powers in peacetime to increase the government’s control over private industry. The public comment[ref 4] on BIS’s proposed rule by the Americans for Prosperity Foundation (“AFP”), a libertarian advocacy group, is a representative articulation of this perspective. AFP argues that the DPA is an “emergency” statute that should not be used in non-emergencies for purposes not directly related to defense industry production.
This kind of concern about peacetime abuses of DPA authority is not new. President George H.W. Bush, after signing a bill reauthorizing the DPA in 1992, remarked that using § 705 during peacetime to collect industrial base data from American companies would “intrude inappropriately into the lives of Americans who own and work in the Nation’s businesses.” And former federal judge Jamie Baker, in an excellent paper from 2021 on the DPA’s potential as an AI governance tool, predicted that the use of § 705 to collect information from “private companies engaged in AI research” would meet with “challenge and controversy.”
Still, to quote from Judge Baker’s piece again, “Section 705 is clearly written and the authority it presents is strong.” Nothing in the DPA indicates that industrial base surveys under § 705 cannot be continuously ongoing, or that the DPA generally can only be used for encouraging increased defense industry production. It’s true that § 705 and its related regulations focus on gathering information about the capacity of the U.S. industrial base to support “the national defense”—but recall that the DPA defines the term “national defense” very broadly, to include a wide variety of non-military considerations such as critical infrastructure protection. Moreover, the DPA generally has been used for purposes not directly related to defense industry production by Presidents from both parties for decades. For example, DPA authorities have been used to supply California with natural gas during the 2000–2001 energy crisis and to block corporate acquisitions that would have given Chinese companies ownership interests in U.S. semiconductor companies. In short, while critics of the proposed rule can reasonably argue that using the DPA in novel ways to collect information from private AI companies is bad policy and politically undesirable, it’s much harder to make a reasonable argument against the legality of the proposed rule.
Also, government access to up-to-date information about frontier models may be more important to national security, and even to military preparedness specifically, than the rule’s critics anticipate. A significant portion of the Notice of Proposed Rulemaking in which BIS introduced the proposed rule is devoted to justifying the importance of the rule to “the national defense” and “the defense industrial base.” According to BIS, integrating dual-use foundation models into “military equipment, signal intelligence devices, and cybersecurity software” could soon become important to the national defense. Therefore, BIS claims, the government needs access to information from developers both to determine whether government action to stimulate further dual-use foundation model development is needed and “to ensure that dual-use foundation models operate in a safe and reliable manner.”
In any event, any lawsuit challenging the proposed rule would probably have to be brought by one of the labs subject to the reporting requirements.[ref 5] A few leading AI labs have submitted public comments on the rule, but none expressed any objection to the basic concept of an ongoing system of mandatory reporting requirements for dual-use foundation model developers. Anthropic’s comment only requests that the reporting requirements be semiannual rather than quarterly, that labs have more time to respond to questions, and that BIS tweak some of the definitions in the proposed rule and take steps to ensure that the sensitive information contained in labs’ responses is handled securely. OpenAI’s comment goes a bit further, asking (among other requests) that BIS limit itself to collecting only “standardized” information relevant to national security concerns and to using information collected “for the sole purpose to ensure [sic] and verify the continuous availability of safe, reliable, and effective AI.” But neither those labs nor any of their competitors has voiced any fundamental objection to the basic idea of mandatory reporting requirements that allow the government to collect safety information about dual-use foundation models. This is unsurprising given that these and other leading AI companies have already committed to voluntarily sharing similar information with the U.S. and other governments. In other words, while it’s too soon to be certain, it looks like the reporting requirements are unlikely to be challenged in court for the time being.
Conclusion
“Information,” according to LawAI affiliate Noam Kolt and his distinguished co-authors, “is the lifeblood of good governance.” The field of AI governance is still in its infancy, and at times it seems like there’s near-universal agreement on the need for the federal government to do something and near-universal disagreement about what exactly that something should be. Establishing a flexible system for gathering information about the most capable models, and building up the government’s capacity for collecting and processing that information in a secure and intelligent way, seems like a good first step. The regulated parties, who have voluntarily committed to sharing certain information with the government and have largely chosen not to object to the idea of ongoing information-gathering by BIS, seem to agree. In an ideal world, Congress would pass a law explicitly authorizing such a system; maybe someday it will. In the meantime, it seems likely that BIS will implement some amended version of its proposed rule in the near future, and that the result will, for better or worse, be the most significant federal AI regulation to date.
The governance misspecification problem
Abstract
Legal rules promulgated to govern emerging technologies often rely on proxy terms and metrics in order to indirectly effectuate background purposes. A common failure mode for this kind of rule occurs when, due to incautious drafting or unforeseen technological developments, a proxy ceases to function as intended and renders a rule ineffective or counterproductive. Borrowing a concept from the technical AI safety literature, we call this phenomenon the “governance misspecification problem.” This article draws on existing legal-philosophical discussions of the nature of rules to define governance misspecification, presents several historical case studies to demonstrate how and why rules become misspecified, and suggests best practices for designing legal rules to avoid misspecification or mitigate its negative effects. Additionally, we examine a few proxy terms used in existing AI governance regulations, such as “frontier AI” and “compute thresholds,” and discuss the significance of the problem of misspecification in the AI governance context.
Legal considerations for defining “frontier model”
Abstract
Many proposed laws and rules for the regulation of artificial intelligence would distinguish between a category consisting of the most advanced models—often called “frontier models”—and all other AI systems. Legal rules that make this distinction will typically need to include or reference a definition of “frontier model” or whatever analogous term is used. The task of creating this definition implicates several important legal considerations. The role of statutory and regulatory definitions in the overall definitional scheme should be considered, as should the advantages and disadvantages of incorporating elements such as technical inputs, capability metrics, epistemic elements, and deployment context into a definition. Additionally, existing legal obstacles to the rapid updating of regulatory definitions should be taken into account—including recent doctrinal developments in administrative law such as the elimination of Chevron deference and the introduction of the major questions doctrine.
I. Introduction
One of the few concrete proposals on which AI governance stakeholders in industry[ref 1] and government[ref 2] have mostly[ref 3] been able to agree is that AI legislation and regulation should recognize a distinct category consisting of the most advanced AI systems. The executive branch of the U.S. federal government refers to these systems, in Executive Order 14110 and related regulations, as “dual-use foundation models.”[ref 4] The European Union’s AI Act refers to a similar class of models as “general-purpose AI models with systemic risk.”[ref 5] And many researchers, as well as leading AI labs and some legislators, use the term “frontier models” or some variation thereon.[ref 6]
These phrases are not synonymous, but they are all attempts to address the same issue—namely that the most advanced AI systems present additional regulatory challenges distinct from those posed by less sophisticated models. Frontier models are expected to be highly capable across a broad variety of tasks and are also expected to have applications and capabilities that are not readily predictable prior to development, nor even immediately known or knowable after development.[ref 7] It is likely that not all of these applications will be socially desirable; some may even create significant risks for users or for the general public.
The question of precisely how frontier models should be regulated is contentious and beyond the scope of this paper. But any law or regulation that distinguishes between “frontier models” (or “dual-use foundation models,” or “general-purpose AI models with systemic risk”) and other AI systems will first need to define the chosen term. A legal rule that applies to a certain category of product cannot be effectively enforced or complied with unless there is some way to determine whether a given product falls within the regulated category. Laws that fail to carefully define ambiguous technical terms often fail in their intended purposes, sometimes with disastrous results.[ref 8] Because the precise meaning of the phrase “frontier model” is not self-evident,[ref 9] the scope of a law or regulation that targeted frontier models without defining that term would be unacceptably uncertain. This uncertainty would impose unnecessary costs on regulated companies (who might overcomply out of an excess of caution or unintentionally undercomply and be punished for it) and on the public (from, e.g., decreased compliance, increased enforcement costs, less risk protection, and more litigation over the scope of the rule).
The task of defining “frontier model” implicates both legal and policy considerations. This paper provides a brief overview of some of the most relevant legal considerations for the benefit of researchers, policymakers, and anyone else with an interest in the topic.
II. Statutory and Regulatory Definitions
Two related types of legal definition—statutory and regulatory—are relevant to the task of defining “frontier model.” A statutory definition is a definition that appears in a statute enacted by a legislative body such as the U.S. Congress or one of the 50 state legislatures. A regulatory definition, on the other hand, appears in a regulation promulgated by a government agency such as the U.S. Department of Commerce or the California Department of Technology (or, less commonly, in an executive order).
Regulatory definitions have both advantages and disadvantages relative to statutory definitions. Legislation is generally a more difficult and resource-intensive process than agency rulemaking, with additional veto points and failure modes.[ref 10] Agencies are therefore capable of putting into effect more numerous and detailed legal rules than Congress can,[ref 11] and can update those rules more quickly and easily than Congress can amend laws.[ref 12] Additionally, executive agencies are often more capable of acquiring deep subject-matter expertise in highly specific fields than are congressional offices due to Congress’s varied responsibilities and resource constraints.[ref 13] This means that regulatory definitions can benefit from agency subject-matter expertise to a greater extent than can statutory definitions, and can also be updated far more easily and often.
The immense procedural and political costs associated with enacting a statute do, however, purchase a greater degree of democratic legitimacy and legal resiliency than a comparable regulation would enjoy. A number of legal challenges that might persuade a court to invalidate a regulatory definition would not be available for the purpose of challenging a statute.[ref 14] And since the rulemaking power exercised by regulatory agencies is generally delegated to them by Congress, most regulations must be authorized by an existing statute. A regulatory definition generally cannot eliminate or override a statutory definition[ref 15] but can clarify or interpret it. Often, a regulatory regime will include both a statutory definition and a more detailed regulatory definition for the same term.[ref 16] This can allow Congress to choose the best of both worlds, establishing a threshold definition with the legitimacy and clarity of an act of Congress while empowering an agency to issue and subsequently update a more specific and technically informed regulatory definition.
III. Existing Definitions
This section discusses five noteworthy attempts to define phrases analogous to “frontier model” from three different existing measures. Executive Order 14110 (“EO 14110”), which President Biden issued in October 2023, includes two complementary definitions of the term “dual-use foundation model.” Two definitions of “covered model” from different versions of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a California bill that was recently vetoed by Governor Newsom, are also discussed, along with the EU AI Act’s definition of “general-purpose AI model with systemic risk.”
A. Executive Order 14110
EO 14110 defines “dual-use foundation model” as:
an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.[ref 17]
The executive order imposes certain reporting requirements on companies “developing or demonstrating an intent to develop” dual-use foundation models,[ref 18] and for purposes of these requirements it instructs the Department of Commerce to “define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements.”[ref 19] In other words, EO 14110 contains both a high-level quasi-statutory[ref 20] definition and a directive to an agency to promulgate a more detailed regulatory definition. The EO also provides a second definition that acts as a placeholder until the agency’s regulatory definition is promulgated:
any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10²³ integer or floating-point operations[ref 21]
Unlike the first definition, which relies on subjective evaluations of model characteristics,[ref 22] this placeholder definition provides a simple set of objective technical criteria that labs can consult to determine whether the reporting requirements apply. For general-purpose models, the sole test is whether the model was trained on computing power greater than 10²⁶ integer or floating-point operations (FLOP); only models that exceed this compute threshold[ref 23] are deemed “dual-use foundation models” for purposes of the reporting requirements mandated by EO 14110.
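To make the placeholder definition’s mechanics concrete, here is a minimal Python sketch of how a developer might self-assess coverage under the two compute thresholds. The function name, its inputs, and the treatment of “primarily biological sequence data” as a simple boolean flag are illustrative assumptions, not anything specified in the executive order.

```python
# Illustrative sketch of the EO 14110 placeholder definition (not an official tool).
# The thresholds come from the text quoted above; everything else is assumed.

GENERAL_THRESHOLD_FLOP = 1e26  # general-purpose models
BIO_THRESHOLD_FLOP = 1e23      # models trained primarily on biological sequence data

def is_dual_use_foundation_model(training_compute_flop: float,
                                 primarily_bio_sequence_data: bool) -> bool:
    """Return True if a model would meet the placeholder reporting threshold."""
    if primarily_bio_sequence_data:
        return training_compute_flop > BIO_THRESHOLD_FLOP
    return training_compute_flop > GENERAL_THRESHOLD_FLOP

# Example: a hypothetical model trained with 2e26 FLOP on broad web data
print(is_dual_use_foundation_model(2e26, primarily_bio_sequence_data=False))  # True
```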
B. California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047)
California’s recently vetoed “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (“SB 1047”) focused on a category that it referred to as “covered models.”[ref 24] The version of SB 1047 passed by the California Senate in May 2024 defined “covered model” to include models meeting either of the following criteria:
(1) The artificial intelligence model was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.
(2) The artificial intelligence model was trained using a quantity of computing power sufficiently large that it could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations in 2024 as assessed using benchmarks commonly used to quantify the general performance of state-of-the-art foundation models.[ref 25]
This definition resembles the placeholder definition in EO 14110 in that it primarily consists of a training compute threshold of 10²⁶ FLOP. However, SB 1047 added an alternative capabilities-based threshold to capture future models which “could reasonably be expected” to be as capable as models trained on 10²⁶ FLOP in 2024. This addition was intended to “future-proof”[ref 26] SB 1047 by addressing one of the main disadvantages of training compute thresholds—their tendency to become obsolete over time as advances in algorithmic efficiency produce highly capable models trained on relatively small amounts of compute.[ref 27]
Following pushback from stakeholders who argued that SB 1047 would stifle innovation,[ref 28] the bill was amended repeatedly in the California State Assembly. The final version defined “covered model” in the following way:
(A) Before January 1, 2027, “covered model” means either of the following:
(i) An artificial intelligence model trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations, the cost of which exceeds one hundred million dollars[ref 29] ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.
(ii) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10²⁵ integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.
(B) (i) Except as provided in clause (ii), on and after January 1, 2027, “covered model” means any of the following:
(I) An artificial intelligence model trained using a quantity of computing power determined by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud compute at the start of training as reasonably assessed by the developer.
(II) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold determined by the Government Operations Agency, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.
(ii) If the Government Operations Agency does not adopt a regulation governing subclauses (I) and (II) of clause (i) before January 1, 2027, the definition of “covered model” in subparagraph (A) shall be operative until the regulation is adopted.
This new definition was more complex than its predecessor. Subsection (A) introduced an initial definition slated to apply until at least 2027, which relied on a training compute threshold of 1026 FLOP paired with a training cost floor of $100,000,000.[ref 30] Subsection (B), in turn, provided for the eventual replacement of the training compute thresholds used in the initial definition with new thresholds to be determined (and presumably updated) by a regulatory agency.
The most significant change in the final version of SB 1047’s definition was the replacement of the capability threshold with a $100,000,000 cost threshold. Because it would currently cost more than $100,000,000 to train a model using >10²⁶ FLOP, the addition of the cost threshold did not change the scope of the definition in the short term. However, the cost of compute has historically fallen precipitously over time in accordance with Moore’s law.[ref 31] This may mean that models trained using significantly more than 10²⁶ FLOP will cost significantly less than the inflation-adjusted equivalent of 100 million 2024 dollars to create at some point in the future.
The old capability threshold expanded the definition of “covered model” because it was an alternative to the compute threshold—models that exceeded either of the two thresholds would have been “covered.” The newer cost threshold, on the other hand, restricted the scope of the definition because it was linked conjunctively to the compute threshold, meaning that only models that exceed both thresholds were covered. In other words, where the May 2024 definition of “covered model” future-proofed itself against the risk of becoming underinclusive by including highly capable low-compute models, the final definition instead guarded against the risk of becoming overinclusive by excluding low-cost models trained on large amounts of compute. Furthermore, the final cost threshold was baked into the bill text and could only have been changed by passing a new statute—unlike the compute threshold, which could have been specified and updated by a regulator.
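The difference between the two versions is easiest to see as a difference in Boolean structure. The sketch below, which uses assumed variable names and a hypothetical future model, illustrates how the May 2024 definition (compute OR capability) and the final definition (compute AND cost) could diverge; it paraphrases the logic described above and is not a restatement of the bill text.

```python
# Illustrative contrast between the two SB 1047 definitions of "covered model".
# Variable names and the example model are assumptions for illustration only.

COMPUTE_THRESHOLD_FLOP = 1e26
COST_THRESHOLD_USD = 100_000_000

def covered_may_2024(training_flop: float, comparable_capability: bool) -> bool:
    # Disjunctive: compute threshold OR capability comparable to a 1e26-FLOP 2024 model.
    return training_flop > COMPUTE_THRESHOLD_FLOP or comparable_capability

def covered_final(training_flop: float, training_cost_usd: float) -> bool:
    # Conjunctive: compute threshold AND $100M training cost.
    return training_flop > COMPUTE_THRESHOLD_FLOP and training_cost_usd > COST_THRESHOLD_USD

# Hypothetical future model: 2e26 FLOP, but trained for only $40M as compute prices fall.
print(covered_may_2024(2e26, comparable_capability=True))   # True (either prong suffices)
print(covered_final(2e26, training_cost_usd=40_000_000))    # False (cost prong not met)
```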
Compared with the overall definitional scheme in EO 14110, SB 1047’s definition was simpler, easier to operationalize, and less flexible. SB 1047 lacked a broad, high-level risk-based definition like the first definition in EO 14110. SB 1047 did resemble EO 14110 in its use of a “placeholder” definition, but where EO 14110 confers broad discretion on the regulator to choose the “set of technical conditions” that will comprise the regulatory definition, SB 1047 only authorized the regulator to set and adjust the numerical value of the compute thresholds in an otherwise rigid statutory definition.
C. EU Artificial Intelligence Act
The EU AI Act classifies AI systems according to the risks they pose. It prohibits systems that do certain things, such as exploiting the vulnerabilities of elderly or disabled people,[ref 32] and regulates but does not ban so-called “high-risk” systems.[ref 33] While this classification system does not map neatly onto U.S. regulatory efforts, the EU AI Act does include a category conceptually similar to the EO’s “dual-use foundation model”: the “general-purpose AI model with systemic risk.”[ref 34] The statutory definition for this category includes a given general-purpose model[ref 35] if:
a. it has high impact capabilities[ref 36] evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; [or]
b. based on a decision of the Commission,[ref 37] ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.
Additionally, models are presumed to have “high impact capabilities” if they were trained on >10²⁵ FLOP.[ref 38] The seven “criteria set out in Annex XIII” to be considered in evaluating model capabilities include a variety of technical inputs (such as the model’s number of parameters and the size or quality of the dataset used in training the model), the model’s performance on benchmarks and other capabilities evaluations, and other considerations such as the number of users the model has.[ref 39] When necessary, the European Commission is authorized to amend the compute threshold and “supplement benchmarks and indicators” in response to technological developments, such as “algorithmic improvements or increased hardware efficiency.”[ref 40]
The EU Act definition resembles the initial, broad definition in the EO in that they both take diverse factors like the size and quality of the dataset used to train the model, the number of parameters, and the model’s capabilities into account. However, the EU Act definition is likely much broader than either EO definition. The training compute threshold in the EU Act is sufficient, but not necessary, to classify models as systemically risky, whereas the (much higher) threshold in the EO’s placeholder definition is both necessary and sufficient. And the first EO definition includes only models that exhibit a high level of performance on tasks that pose serious risks to national security, while the EU Act includes all general-purpose models with “high impact capabilities,” which it defines as including any model trained on more than 10²⁵ FLOP.
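As a rough illustration of that structural difference, the following sketch approximates the EU Act’s classification logic in code. The real test turns on qualitative evaluation under Annex XIII and on the Commission’s judgment, so this captures only the Boolean structure described above; the function and its inputs are assumptions, and the presumption is treated as conclusive for simplicity.

```python
# Schematic of the EU AI Act's "general-purpose AI model with systemic risk" test,
# as described above. Qualitative judgments are reduced to booleans for illustration.

PRESUMPTION_THRESHOLD_FLOP = 1e25  # training compute above this presumes high-impact capabilities

def has_systemic_risk(training_flop: float,
                      high_impact_capabilities: bool,
                      commission_designation: bool) -> bool:
    presumed_high_impact = training_flop > PRESUMPTION_THRESHOLD_FLOP
    # The compute threshold is sufficient (via the presumption) but not necessary:
    # models can also qualify through capability evaluations or Commission designation.
    return presumed_high_impact or high_impact_capabilities or commission_designation

# A model below the EO's 1e26 placeholder threshold but above the EU's 1e25 presumption threshold:
print(has_systemic_risk(3e25, high_impact_capabilities=False, commission_designation=False))  # True
```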
The EU Act definition resembles the final SB 1047 definition of “covered model” in that both definitions authorize a regulator to update their thresholds in response to changing circumstances. It also resembles SB 1047’s May 2024 definition in that both definitions incorporate a training compute threshold and a capabilities-based element.
IV. Elements of Existing Definitions
As the examples discussed above demonstrate, legal definitions of “frontier model” can consist of one or more of a number of criteria. This section discusses a few of the most promising definitional elements.
A. Technical inputs and characteristics
A definition may classify AI models according to their technical characteristics or the technical inputs used in training the model, such as training compute, parameter count, and dataset size and type. These elements can be used in either statutory or regulatory definitions.
Training compute thresholds are a particularly attractive option for policymakers,[ref 41] as evidenced by the three examples discussed above. “Training compute” refers to the computational power used to train a model, often measured in integer or floating-point operations (OP or FLOP).[ref 42] Training compute thresholds function as a useful proxy for model capabilities because capabilities tend to increase as computational resources used to train the model increase.[ref 43]
One advantage of using a compute threshold is that training compute is a straightforward metric that is quantifiable and can be readily measured, monitored, and verified.[ref 44] Because of these characteristics, determining with high certainty whether a given model exceeds a compute threshold is relatively easy. This, in turn, facilitates enforcement of and compliance with regulations that rely on a compute-based definition. And since the amount of training compute (and other technical inputs) can be estimated prior to the training run,[ref 45] developers can predict early in development whether a model will be covered.
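One reason training compute can be estimated before a run begins is that, for standard dense transformer training, total compute is commonly approximated as roughly six FLOP per parameter per training token. The sketch below applies that rule of thumb to a hypothetical model; the parameter count and token count are illustrative assumptions, not figures from any actual system.

```python
# Back-of-the-envelope estimate of training compute using the common
# ~6 * parameters * tokens approximation for dense transformer training.
# The model size and token count below are hypothetical.

def estimate_training_flop(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

THRESHOLD_FLOP = 1e26

params = 500e9   # 500 billion parameters (hypothetical)
tokens = 40e12   # 40 trillion training tokens (hypothetical)

estimate = estimate_training_flop(params, tokens)
print(f"Estimated training compute: {estimate:.2e} FLOP")    # ~1.20e+26 FLOP
print("Exceeds 1e26 threshold:", estimate > THRESHOLD_FLOP)  # True
```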
One disadvantage of a compute-based definition is that compute thresholds are a proxy for model capabilities, which are in turn a proxy for risk. Definitions that make use of multiple nested layers of proxy terms in this manner are particularly prone to becoming untethered from their original purpose.[ref 46] This can be caused, for example, by the operation of Goodhart’s Law, which suggests that “when a measure becomes a target, it ceases to be a good measure.”[ref 47] Particularly problematic, especially for statutory definitions that are more difficult to update, is the possibility that a compute threshold may become underinclusive over time as improvements in algorithmic efficiency allow for the development of highly capable models trained on below-threshold levels of compute.[ref 48] This possibility is one reason why SB 1047 and the EU AI Act both supplement their compute thresholds with alternative, capabilities-based elements.
In addition to training compute, two other model characteristics correlated with capabilities are the number of model parameters[ref 49] and the size of the dataset on which the model was trained.[ref 50] Either or both of these characteristics can be used as an element of a definition. A definition can also rely on training data characteristics other than size, such as the quality or type of the data used; the placeholder definition in EO 14110, for example, contains a lower compute threshold for models “trained… using primarily biological sequence data.”[ref 51] EO 14110 requires a dual-use foundation model to contain “at least tens of billions of parameters,”[ref 52] and the “number of parameters of the model” is a criterion to be considered under the EU AI Act.[ref 53] EO 14110 also specifies that only models “trained on broad data” can be dual-use foundation models,[ref 54] and the EU AI Act includes “the quality or size of the data set, for example measured through tokens” as one criterion for determining whether an AI model poses systemic risks.[ref 55]
Dataset size and parameter count share many of the pros and cons of training compute. Like training compute, they are objective metrics that can be measured and verified, and they serve as proxies for model capabilities.[ref 56] Training compute is often considered the best and most reliable proxy of the three, in part because it is the most closely correlated with performance and is difficult to manipulate.[ref 57] However, partially redundant backup metrics can still be useful.[ref 58] Dataset characteristics other than size are typically less quantifiable and harder to measure but are also capable of capturing information that the quantifiable metrics cannot.
B. Capabilities
Frontier models can also be defined in terms of their capabilities. A capabilities-based definition element typically sets a threshold level of competence that a model must achieve to be considered “frontier,” either in one or more specific domains or across a broad range of domains. A capabilities-based definition can provide specific, objective criteria for measuring a model’s capabilities,[ref 59] or it can describe the capabilities required in more general terms and leave the task of evaluation to the discretion of future interpreters.[ref 60] The former approach might be better suited to a regulatory definition, especially if the criteria used will have to be updated frequently, whereas the latter approach would be more typical of a high-level statutory definition.
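To illustrate what the former, criteria-based approach might look like in practice, the sketch below checks a model’s evaluation scores against a set of benchmark thresholds. The benchmark names and threshold values are purely hypothetical placeholders, chosen only to show the structure of such a rule.

```python
# Hypothetical capabilities-based definition element: a model is "frontier" if it
# meets or exceeds specified scores on designated benchmarks. All names and numbers
# below are placeholders for illustration, not proposals.

CAPABILITY_THRESHOLDS = {
    "general_knowledge_benchmark": 0.85,  # hypothetical benchmark, score in [0, 1]
    "coding_benchmark": 0.70,
    "cyber_offense_benchmark": 0.50,
}

def is_frontier_by_capability(scores: dict[str, float]) -> bool:
    """Covered if the model meets or exceeds the threshold on any designated benchmark."""
    return any(scores.get(name, 0.0) >= threshold
               for name, threshold in CAPABILITY_THRESHOLDS.items())

print(is_frontier_by_capability({"coding_benchmark": 0.75}))  # True
```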
Basing a definition on capabilities, rather than relying on a proxy for capabilities like training compute, eliminates the risk that the chosen proxy will cease to be a good measure of capabilities over time. Therefore, a capabilities-based definition is more likely than, e.g., a compute threshold to remain robust over time in the face of improvements in algorithmic efficiency. This was the point of the May 2024 version of SB 1047’s use of a capabilities element tethered to a compute threshold (“similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations in 2024”)—it was an attempt to capture some of the benefits of an input-based definition while also guarding against the possibility that models trained on less than 10²⁶ FLOP may become far more capable in the future than they are in 2024.
However, capabilities are far more difficult than compute to accurately measure. Whether a model has demonstrated “high levels of performance at tasks that pose a serious risk to security” under the EO’s broad capabilities-based definition is not something that can be determined objectively and to a high degree of certainty like the size of a dataset in tokens or the total FLOP used in a training run. Model capabilities are often measured using benchmarks (standardized sets of tasks or questions),[ref 61] but creating benchmarks that accurately measure the complex and diverse capabilities of general-purpose foundation models[ref 62] is notoriously difficult.[ref 63]
Additionally, model capabilities (unlike the technical inputs discussed above) are generally not measurable until after the model has been trained.[ref 64] This makes it difficult to regulate the development of frontier models using capabilities-based definitions, although post-development, pre-release regulation is still possible.
C. Risk
Some researchers have suggested the possibility of defining frontier AI systems on the basis of the risks they pose to users or to public safety instead of or in addition to relying on a proxy metric, like capabilities, or a proxy for a proxy, such as compute.[ref 65] The principal advantage of this direct approach is that it can, in theory, allow for better-targeted regulations—for instance, by allowing a definition to exclude highly capable but demonstrably low-risk models. The principal disadvantage is that measuring risk is even more difficult than measuring capabilities.[ref 66] The science of designing rigorous safety evaluations for foundation models is still in its infancy.[ref 67]
Of the three real-world measures discussed in Section III, only EO 14110 mentions risk directly. The broad initial definition of “dual-use foundation model” includes models that exhibit “high levels of performance at tasks that pose a serious risk to security,” such as “enabling powerful offensive cyber operations through automated vulnerability discovery” or making it easier for non-experts to design chemical weapons. This is a capability threshold combined with a risk threshold; the tasks at which a dual-use foundation model must be highly capable are those that pose a “serious risk” to security, national economic security, and/or national public health or safety. As EO 14110 shows, risk-based definition elements can specify the type of risk that a frontier model must create instead of addressing the severity of the risks created.
D. Epistemic elements
One of the primary justifications for recognizing a category of “frontier models” is the likelihood that broadly capable AI models that are more advanced than previous generations of models will have capabilities and applications that are not readily predictable ex ante.[ref 68] As the word “frontier” implies, lawmakers and regulators focusing on frontier models are interested in targeting models that break new ground and push into the unknown.[ref 69] This was, at least in part, the reason for the inclusion of training compute thresholds of 10²⁶ FLOP in EO 14110 and SB 1047—since the most capable current models were trained on 5×10²⁵ or fewer FLOP,[ref 70] a model trained on 10²⁶ FLOP would represent a significant step forward into uncharted territory.
While it is possible to target models that advance the state of the art by setting and adjusting capability or compute thresholds, a more direct alternative approach would be to include an epistemic element in a statutory definition of “frontier model.” An epistemic element would distinguish between “known” and “unknown” models, i.e., between well-understood models that pose only known risks and poorly understood models that may pose unfamiliar and unpredictable risks.[ref 71]
This kind of distinction between known and unknown risks has a long history in U.S. regulation.[ref 72] For instance, the Toxic Substances Control Act (TSCA) prohibits the manufacturing of any “new chemical substance” without a license.[ref 73] The EPA keeps and regularly updates a list of chemical substances which are or have been manufactured in the U.S., and any substance not included on this list is “new” by definition.[ref 74] In other words, the TSCA distinguishes between chemicals (including potentially dangerous chemicals) that are familiar to regulators and unfamiliar chemicals that pose unknown risks.
One advantage of an epistemic element is that it allows a regulator to address “unknown unknowns” separately from better-understood risks that can be evaluated and mitigated more precisely.[ref 75] Additionally, the scope of an epistemic definition, unlike that of most input- and capability-based definitions, would change over time as regulators became familiar with the capabilities of and risks posed by new models.[ref 76] Models would drop out of the “frontier” category once regulators became sufficiently familiar with their capabilities and risks.[ref 77] Like a capabilities- or risk-based definition, however, an epistemic definition might be difficult to operationalize.[ref 78] To determine whether a given model was “frontier” under an epistemic definition, it would probably be necessary to either rely on a proxy for unknown capabilities or authorize a regulator to categorize eligible models according to a specified process.[ref 79]
E. Deployment context
The context in which an AI system is deployed can serve as an element in a definition. The EU AI Act, for example, treats the number of registered end users and the number of registered EU business users a model has as factors to be considered in determining whether a model is a “general-purpose AI model with systemic risk.”[ref 80] Deployment context typically does not in and of itself provide enough information about the risks posed by a model to function as a stand-alone definitional element, but it can be a useful proxy for the kind of risk posed by a given model. Some models may cause harms in proportion to their number of users, and the justification for aggressively regulating these models grows stronger the more users they have. A model that will only be used by government agencies, or by the military, creates a different set of risks than a model that is made available to the general public.
V. Updating Regulatory Definitions
A recurring theme in the scholarly literature on the regulation of emerging technologies is the importance of regulatory flexibility.[ref 81] Because of the rapid pace of technological progress, legal rules designed to govern emerging technologies like AI tend to quickly become outdated and ineffective if they cannot be rapidly and frequently updated in response to changing circumstances.[ref 82] For this reason, it may be desirable to authorize an executive agency to promulgate and update a regulatory definition of “frontier model,” since regulatory definitions can typically be updated more frequently and more easily than statutory definitions under U.S. law.[ref 83]
Historically, failing to quickly update regulatory definitions in the context of emerging technologies has often led to the definitions becoming obsolete or counterproductive. For example, U.S. export controls on supercomputers in the 1990s and early 2000s defined “supercomputer” in terms of the number of millions of theoretical operations per second (MTOPS) the computer could perform.[ref 84] Rapid advances in the processing power of commercially available computers soon rendered the initial definition obsolete, however, and the Clinton administration was forced to revise the MTOPS threshold repeatedly to avoid harming the competitiveness of the American computer industry.[ref 85] Eventually, the MTOPS metric itself was rendered obsolete, leading to a period of several years in which supercomputer export controls were ineffective at best.[ref 86]
There are a number of legal considerations that may prevent an agency from quickly updating a regulatory definition and a number of measures that can be taken to streamline the process. One important aspect of the rulemaking process is the Administrative Procedure Act’s “notice and comment” requirement.[ref 87] In order to satisfy this requirement, agencies are generally obligated to publish notice of any proposed amendment to an existing regulation in the Federal Register, allow time for the public to comment on the proposal, respond to public comments, publish a final version of the new rule, and then allow at least 30–60 days before the rule goes into effect.[ref 88] From the beginning of the notice-and-comment process to the publication of a final rule, this process can take anywhere from several months to several years.[ref 89] However, an agency can waive the 30–60 day publication period or even the entire notice-and-comment requirement for “good cause” if observing the standard procedures would be “impracticable, unnecessary, or contrary to the public interest.”[ref 90] Of course, the notice-and-comment process has benefits as well as costs; public input can be substantively valuable and informative for agencies, and also increases the democratic accountability of agencies and the transparency of the rulemaking process. In certain circumstances, however, the costs of delay can outweigh the benefits. U.S. agencies have occasionally demonstrated a willingness to waive procedural rulemaking requirements in order to respond to emergency AI-related developments. The Bureau of Industry and Security (“BIS”), for example, waived the normal 30-day waiting period for an interim rule prohibiting the sale of certain advanced AI-relevant chips to China in October 2023.[ref 91]
Another way to encourage quick updating for regulatory definitions is for Congress to statutorily authorize agencies to eschew or limit the length of notice and comment, or to compel agencies to promulgate a final rule by a specified deadline.[ref 92] Because notice and comment is a statutory requirement, it can be adjusted as necessary by statute.
For regulations exceeding a certain threshold of economic significance, another substantial source of delay is OIRA review. OIRA, the Office of Information and Regulatory Affairs, is an office within the White House that oversees interagency coordination and undertakes centralized cost-benefit analysis of important regulations.[ref 93] Like notice and comment, OIRA review can have significant benefits—such as improving the quality of regulations and facilitating interagency cooperation—but it also delays the implementation of significant rules, typically by several months.[ref 94] OIRA review can be waived either by statutory mandate or by OIRA itself.[ref 95]
VI. Deference, Delegation, and Regulatory Definitions
Recent developments in U.S. administrative law may make it more difficult for Congress to effectively delegate the task of defining “frontier model” to a regulatory agency. A number of recent Supreme Court cases signal an ongoing shift in U.S. administrative law doctrine intended to limit congressional delegations of rulemaking authority.[ref 96] Whether this development is good or bad on net is a matter of perspective; libertarian-minded observers who believe that the U.S. has too many legal rules already[ref 97] and that overregulation is a bigger problem than underregulation have welcomed the change,[ref 98] while pro-regulation observers predict that it will significantly reduce the regulatory capacity of agencies in a number of important areas.[ref 99]
Regardless of where one falls on that spectrum of opinion, the relevant takeaway for efforts to define “frontier model” is that it will likely become somewhat more difficult for agencies to promulgate and update regulatory definitions without a clear statutory authorization to do so. If Congress still wishes to authorize the creation of regulatory definitions, however, it can protect agency definitions from legal challenges by clearly and explicitly authorizing agencies to exercise discretion in promulgating and updating definitions of specific terms.
A. Loper Bright and deference to agency interpretations
In a recent decision in the combined cases of Loper Bright Enterprises v. Raimondo and Relentless v. Department of Commerce, the Supreme Court overruled a longstanding legal doctrine known as Chevron deference.[ref 100] Under Chevron, federal courts were required to defer to certain agency interpretations of federal statutes when (1) the relevant part of the statute being interpreted was genuinely ambiguous and (2) the agency’s interpretation was reasonable. After Loper Bright, courts are no longer required to defer to these interpretations—instead, under a doctrine known as Skidmore deference,[ref 101] agency interpretations will prevail in court only to the extent that courts are persuaded by them.[ref 102]
Justice Elena Kagan’s dissenting opinion in Loper Bright argues that the decision will harm the regulatory capacity of agencies by reducing the ability of agency subject-matter experts to promulgate regulatory definitions of ambiguous statutory phrases in “scientific or technical” areas.[ref 103] The dissent specifically warns that, after Loper Bright, courts will “play a commanding role” in resolving questions like “[w]hat rules are going to constrain the development of A.I.?”[ref 104]
Justice Kagan’s dissent probably somewhat overstates the significance of Loper Bright to AI governance for rhetorical effect.[ref 105] The end of Chevron deference does not mean that Congress has completely lost the ability to authorize regulatory definitions; where Congress has explicitly directed an agency to define a specific statutory term, Loper Bright will not prevent the agency from doing so.[ref 106] An agency’s authority to promulgate a regulatory definition under a statute resembling EO 14110, which explicitly directs the Department of Commerce to define “dual-use foundation model,” would likely be unaffected. However, Loper Bright has created a great deal of uncertainty regarding the extent to which courts will accept agency claims that Congress has implicitly authorized the creation of regulatory definitions.[ref 107]
To better understand how this uncertainty might affect efforts to define “frontier model,” consider the following real-life example. The Energy Policy and Conservation Act (“EPCA”) includes a statutory definition of the term “small electric motor.”[ref 108] Like many statutory definitions, however, this definition is not detailed enough to resolve all disputes about whether a given product is or is not a “small electric motor” for purposes of EPCA. In 2010, the Department of Energy (“DOE”), which is authorized under EPCA to promulgate energy efficiency standards governing “small electric motors,”[ref 109] issued a regulatory definition of “small electric motor” specifying that the term referred to motors with power outputs between 0.25 and 3 horsepower.[ref 110] The National Electrical Manufacturers Association (“NEMA”), a trade association of electrical equipment manufacturers, sued to challenge the rule, arguing that motors with between 1 and 3 horsepower were too powerful to be “small electric motors” and that the DOE was exceeding its statutory authority by attempting to regulate them.[ref 111]
In a 2011 opinion that utilized the Chevron framework, the federal court that decided NEMA’s lawsuit considered the language of EPCA’s statutory definition and concluded that EPCA was ambiguous as to whether motors with between 1 and 3 horsepower could be “small electric motors.”[ref 112] The court then found that the DOE’s regulatory definition was a reasonable interpretation of EPCA’s statutory definition, deferred to the DOE under Chevron, and upheld the challenged regulation.[ref 113]
Under Chevron, federal courts were required to assume that Congress had implicitly authorized agencies like the DOE to resolve ambiguities in a statute, as the DOE did in 2010 by promulgating its regulatory definition of “small electric motor.” After Loper Bright, courts will recognize fewer implicit delegations of definition-making authority. For instance, while EPCA requires the DOE to prescribe “testing requirements” and “energy conservation standards” for small electric motors, it does not explicitly authorize the DOE to promulgate a regulatory definition of “small electric motor.” If a rule like the one challenged by NEMA were challenged today, the DOE could still argue that Congress implicitly authorized the creation of such a rule by giving the DOE authority to prescribe standards and testing requirements—but such an argument would probably be less likely to succeed than the Chevron argument that saved the rule in 2011.
Today, a court that did not find an implicit delegation of rulemaking authority in EPCA would not defer to the DOE’s interpretation. Instead, the court would simply compare the DOE’s regulatory definition of “small electric motor” with NEMA’s proposed definition and decide which of the two was a more faithful interpretation of EPCA’s statutory definition.[ref 114] Similarly, when or if some future federal statute uses the phrase “frontier model” or any analogous term, agency attempts to operationalize the statute by enacting detailed regulatory definitions that are not explicitly authorized by the statute will be easier to challenge after Loper Bright than they would have been under Chevron.
Congress can avoid Loper Bright issues by using clear and explicit statutory language to authorize agencies to promulgate and update regulatory definitions of “frontier model” or analogous phrases. However, it is often difficult to predict in advance whether or how a statutory definition will become ambiguous over time. This is especially true in the context of emerging technologies like AI, where the rapid pace of technological development and the still poorly understood nature of the technology can quickly render carefully crafted definitions obsolete.[ref 115]
Suppose, for example, that a federal statute resembling the May 2024 draft of SB 1047 were enacted. The statutory definition would include future models trained on a quantity of compute such that they “could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using [>10^26 FLOP] in 2024.” If the statute did not contain an explicit authorization for some agency to determine the quantity of compute that qualified in a given year, any attempt to set and enforce updated regulatory compute thresholds could be challenged in court.
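To make the mechanics of such a threshold concrete, the following is a minimal sketch, in illustrative Python, of how an enforcing agency might apply an annually updated compute threshold of the kind contemplated by the draft bill. Only the 2024 figure of 10^26 FLOP comes from the draft; the post-2024 threshold values, the function name, and the fallback rule are hypothetical assumptions made purely for illustration.

```python
# Purely illustrative sketch (not drawn from any statute or regulation):
# applying an agency-maintained, annually updated compute threshold to decide
# whether a model falls within a "frontier model" definition.

# Hypothetical agency-set thresholds, in total training FLOP, by year. Only
# the 2024 figure (1e26 FLOP) appears in the draft bill discussed above;
# later values are invented for illustration.
AGENCY_COMPUTE_THRESHOLDS = {
    2024: 1e26,
    2025: 5e25,  # hypothetical downward revision as training efficiency improves
    2026: 2e25,  # hypothetical
}

def is_covered_model(training_flop: float, year: int) -> bool:
    """Return True if a model trained with `training_flop` in `year`
    meets or exceeds the threshold in force for that year."""
    # Fall back to the most recent threshold on record if the agency has not
    # yet published a figure for `year`.
    applicable_year = max(y for y in AGENCY_COMPUTE_THRESHOLDS if y <= year)
    return training_flop >= AGENCY_COMPUTE_THRESHOLDS[applicable_year]

if __name__ == "__main__":
    print(is_covered_model(3e25, 2026))  # True under the hypothetical 2026 threshold
    print(is_covered_model(3e25, 2024))  # False under the 2024 figure
```

A test of this form turns entirely on training compute, which bears on the flexibility concerns discussed below.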
The enforcing agency could argue that the statute included an implied authorization for the agency to promulgate and update the regulatory definitions at issue. This argument might succeed or fail, depending on the language of the statute, the nature of the challenged regulatory definitions, and the judicial philosophy of the deciding court. But regardless of the outcome of any individual case, challenges to impliedly authorized regulatory definitions will probably be more likely to succeed after Loper Bright than they would have been under Chevron. Perhaps more importantly, agencies will be aware that regulatory definitions will no longer receive the benefit of Chevron deference and may regulate more cautiously in order to avoid being sued.[ref 116] Moreover, even if the statute did explicitly authorize an agency to issue updated compute thresholds, such an authorization might not allow the agency to respond to future technological breakthroughs by considering some factor other than the quantity of training compute used.
In other words, a narrow congressional authorization to regulatorily define “frontier model” may prove insufficiently flexible after Loper Bright. Congress could attempt to address this possibility by instead enacting a very broad authorization.[ref 117] An overly broad authorization, however, may be undesirable for reasons of democratic accountability, as it would give unelected agency officials discretionary control over which models to regulate as “frontier.” Moreover, an overly broad authorization might risk running afoul of two related constitutional doctrines that limit the ability of Congress to delegate rulemaking authority to agencies—the major questions doctrine and the nondelegation doctrine.
B. The nondelegation doctrine
Under the nondelegation doctrine, which arises from the constitutional principle of separation of powers, Congress may not constitutionally delegate legislative power to executive branch agencies. In its current form, this doctrine has little relevance to efforts to define “frontier model.” Under current law, Congress can validly delegate rulemaking authority to an agency as long as the statute in which the delegation occurs includes an “intelligible principle” that provides adequate guidance for the exercise of that authority.[ref 118] In practice, this is an easy standard to satisfy—even vague and general legislative guidance, such as directing agencies to regulate in a way that “will be generally fair and equitable and will effectuate the purposes of the Act,” has been held to contain an intelligible principle.[ref 119] The Supreme Court has used the nondelegation doctrine to strike down statutes only twice, in two 1935 decisions invalidating sweeping New Deal laws.[ref 120]
However, some commentators have suggested that the Supreme Court may revisit the nondelegation doctrine in the near future,[ref 121] perhaps by discarding the “intelligible principle” test in favor of something like the standard suggested by Justice Gorsuch in his 2019 dissent in Gundy v. United States.[ref 122] In Gundy, Justice Gorsuch suggested that the nondelegation doctrine, properly understood, requires Congress to make “all the relevant policy decisions” and delegate to agencies only the task of “filling up the details” via regulation.[ref 123]
Therefore, if the Supreme Court does significantly strengthen the nondelegation doctrine, it is possible that a statute authorizing an agency to create a regulatory definition of “frontier model” would need to include meaningful guidance as to what the definition should look like. This is most likely to be the case if the regulatory definition in question is a key part of an extremely significant regulatory scheme, because “the degree of agency discretion that is acceptable varies according to the power congressionally conferred.”[ref 124] Congress generally “need not provide any direction” to agencies regarding the manner in which they define specific and relatively unimportant technical terms,[ref 125] but must provide “substantial guidance” for extremely important and complex regulatory tasks that could significantly impact the national economy.[ref 126]
C. The major questions doctrine
Like the nondelegation doctrine, the major questions doctrine is a constitutional limitation on Congress’s ability to delegate rulemaking power to agencies. Like the nondelegation doctrine, it addresses concerns about the separation of powers and the increasingly prominent role executive branch agencies have taken on in the creation of important legal rules. Unlike the nondelegation doctrine, however, the major questions doctrine is a recent innovation. The Supreme Court acknowledged it by name for the first time in the 2022 case West Virginia v. Environmental Protection Agency,[ref 127] where it was used to strike down an EPA rule regulating power plant carbon dioxide emissions. Essentially, the major questions doctrine provides that courts will not accept an interpretation of a statute that grants an agency authority over a matter of great “economic or political significance” unless there is a “clear congressional authorization” for the claimed authority.[ref 128] Whereas the nondelegation doctrine provides a way to strike down statutes as unconstitutional, the major questions doctrine only affects the way that statutes are interpreted.
Supporters of the major questions doctrine argue that it helps to rein in excessively broad delegations of legislative power to the administrative state and serves a useful separation-of-powers function. The doctrine’s critics, however, have argued that it limits Congress’s ability to set up flexible regulatory regimes that allow agencies to respond quickly and decisively to changing circumstances.[ref 129] According to this school of thought, requiring a clear statement authorizing each economically significant agency action inhibits Congress’s ability to communicate broad discretion in handling problems that are difficult to foresee in advance.
This difficulty is particularly salient in the context of regulatory regimes for the governance of emerging technologies.[ref 130] Justice Kagan made this point in her dissent from the majority opinion in West Virginia, where she argued that the statute at issue was broadly worded because Congress had known that “without regulatory flexibility, changing circumstances and scientific developments would soon render the Clean Air Act obsolete.”[ref 131] Because advanced AI systems are likely to have a significant impact on the U.S. economy in the coming years,[ref 132] it is plausible that the task of choosing which systems should be categorized as “frontier” and subject to increased regulatory scrutiny will be an issue of great “economic and political significance.” If it is, then the major questions doctrine could be invoked to invalidate agency efforts to promulgate or amend a definition of “frontier model” to address previously unforeseen unsafe capabilities.
For example, consider a hypothetical federal statute instituting a licensing regime for frontier models that includes a definition similar to the placeholder in EO 14110 (empowering the Bureau of Industry and Security to “define, and thereafter update as needed on a regular basis, the set of technical conditions [that determine whether a model is a frontier model].”). Suppose that BIS initially defined “frontier model” under this statute using a regularly updated compute threshold, but that ten years after the statute’s enactment a new kind of AI system was developed that could be trained to exhibit cutting-edge capabilities using a relatively small quantity of training compute. If BIS attempted to amend its regulatory definition of “frontier model” to include a capabilities threshold that would cover this newly developed and economically significant category of AI system, that new regulatory definition might be challenged under the major questions doctrine. In that situation, a court with deregulatory inclinations might not view the broad congressional authorization for BIS to define “frontier model” as a sufficiently clear statement of congressional intent to allow BIS to later institute a new and expanded licensing regime based on less objective technical criteria.[ref 133]
VII. Conclusion
One of the most common mistakes that nonlawyers make when reading a statute or regulation is to assume that each word of the text carries its ordinary English meaning. This error occurs because legal rules, unlike most writing encountered in everyday life, are often written in a sort of simple code where a number of the terms in a given sentence are actually stand-ins for much longer phrases catalogued elsewhere in a “definitions” section.
This tendency to overlook the role that definitions play in legal rules has an analogue in a widespread tendency to overlook the importance of well-crafted definitions to a regulatory scheme. The object of this paper, therefore, has been to explain some of the key legal considerations relevant to the task of defining “frontier model” or any of the analogous phrases used in existing laws and regulations.
One such consideration is the respective roles of statutory and regulatory definitions, which can be used independently or in conjunction with each other to create a definition that is both technically sound and democratically legitimate. Another is the selection and combination of potential definitional elements, including technical inputs, capabilities metrics, risk, deployment context, and familiarity, any of which can be combined into a single statutory or regulatory definition. Legal mechanisms for facilitating rapid and frequent updating of regulations targeting emerging technologies also merit attention. Finally, the nondelegation and major questions doctrines, together with the recent elimination of Chevron deference, may affect the scope of discretion that can be conferred on agencies for the creation and updating of regulatory definitions.
Beyond a piecemeal approach: prospects for a framework convention on AI
Abstract
Solving many of the challenges presented by artificial intelligence (AI) requires international coordination and cooperation. In response, the past years have seen multiple global initiatives to govern AI. However, very few proposals have discussed treaty models or design for AI governance, and the literature has therefore neglected framework conventions: generally, multilateral law-making treaties that establish a two-step regulatory process through which initially underspecified obligations and implementation mechanisms are subsequently specified via protocols. This chapter asks whether and how a Framework Convention on AI (FCAI) might serve as a regulatory tool for global AI governance, in contrast with the more traditional piecemeal approach based on individual treaties that govern isolated issues and establish no subsequent regime. To answer these questions, the chapter first briefly sets out the recent context of global AI governance and the governance gaps that remain to be filled. It then explores the elements, definition, and general role of framework conventions as an international regulatory instrument. On this basis, the chapter considers the structural trade-offs and challenges that an FCAI would face, before discussing key ways in which it could be designed to address these concerns. We argue that, while imperfect, an FCAI may be the most tractable and appropriate solution for the international governance of AI if it follows a hybrid model that combines a wide scope with specific obligations and implementation mechanisms concerning issues on which states already converge.
The future of international scientific assessments of AI’s risks
Abstract
Effective international coordination to address AI’s global impacts demands a shared, scientifically rigorous understanding of AI risks. This paper examines the challenges and opportunities in establishing international scientific consensus in this domain. It analyzes current efforts, including the UK-led International Scientific Report on the Safety of Advanced AI and emerging UN initiatives, identifying key limitations and tradeoffs. The authors propose a two-track approach: 1) a UN-led process focusing on broad AI issues and engaging member states, and 2) an independent annual report specifically focused on advanced AI risks. The paper recommends careful coordination between these efforts to leverage their respective strengths while maintaining their independence. It also evaluates potential hosts for the independent report, including the network of AI Safety Institutes, the OECD, and scientific organizations like the International Science Council. The proposed framework aims to balance scientific rigor, political legitimacy, and timely action to facilitate coordinated international action on AI risks.
The limits of liability
I’m probably as optimistic as anyone about the role that liability can play in AI governance. Indeed, as I’ll argue in a forthcoming article, I think it should be the centerpiece of our AI governance regime. But it’s important to recognize its limits.
First and foremost, liability alone is not an effective tool for solving public goods problems. This means it is poorly positioned to address at least some challenges presented by advanced AI. Liability is principally a tool for addressing risk externalities generated by training and deploying advanced AI systems. That is, AI developers and their customers largely capture the benefits of increasing AI capabilities, but most of the risk is borne by third parties who have no choice in the matter. This is the primary market failure associated with AI risk, but it’s not the only one. There is also a public goods problem with AI alignment and safety research. Like most information goods, advances in alignment and safety research are non-rival (you and I can both use the same idea without leaving less for the other) and non-excludable (once an idea is out, it is hard to stop others from using it). Markets generally underprovide public goods, and AI safety research is no exception. Plausible policy interventions to address this problem include prizes and other forms of public subsidies. Private philanthropy can also continue to play an important role in supporting alignment and safety research. There may also be winner-take-all race dynamics that generate market distortions not fully captured by the risk externality and public goods problems.
Second, there are some plausible AI risk externalities that liability cannot realistically address, especially those involving structural harms or highly attenuated causal chains. For instance, if AI systems are used to spread misinformation or interfere with elections, this is unlikely to give rise to a liability claim. To the extent that AI raises novel issues in those domains, other policy ideas may be needed. Similarly, some ways of contributing to the risk of harm are too attenuated to trigger liability claims. For example, if the developer of a frontier or near-frontier model releases information about the model and its training data/process that enables lagging labs to move closer to the frontier, this could induce leading labs to move faster and exercise less caution. But it would not be appropriate or feasible to use liability tools to hold the first lab responsible for the downstream harms from this race dynamic.
Liability also has trouble handling uninsurable risks—those that might cause harms so large that a compensatory damages award would not be practically enforceable—if warning shots are unlikely. In my recent paper laying out a tort liability framework for mitigating catastrophic AI risk, I argue that uninsurable risks can generally be addressed through liability by applying punitive damages in “near miss” cases of practically compensable harm associated with the uninsurable risk. But if some uninsurable risks are unlikely to produce warning shots, then this indirect liability mechanism would not work to mitigate them. And if the uninsurable risk is realized, the harm would be too large to make a compensatory damages judgment practically enforceable. That means AI developers and deployers would have inadequate incentives to mitigate those risks.
Like most forms of domestic AI regulation, unilateral imposition of a strong liability framework is also subject to regulatory arbitrage. If the liability framework is sufficiently binding, AI development may shift to jurisdictions that don’t impose strong liability policies or comparably onerous regulations. While foreign AI developers would still be subject to liability if they harm people in countries with strong liability regimes, it may prove difficult to enforce those judgments if the developer lacks substantial assets in the country where the injuries occur. One potential solution to this problem is international treaties establishing reciprocal enforcement of liability judgments reached by the other country’s courts.
Finally, liability is a weak tool for influencing the conduct of governmental actors. By default, many governments will be shielded from liability, and many legislative proposals will continue to exempt government entities. Even if governments waive sovereign immunity for AI harms they are responsible for, the prospect of liability is unlikely to sway the decisions of government officials, who are more responsive to political than economic incentives. This means liability is a weak tool in scenarios where the major AI labs get nationalized as the technology gets more powerful. But even if AI research and development remains largely in the private sector, the use of AI by government officials will be poorly constrained by liability. Ideas like law-following AI are likely to be needed to constrain governmental AI deployment.
Existing authorities for oversight of frontier AI models
Abstract
It has been suggested that a national frontier AI governance strategy should include a comprehensive regime for tracking and licensing the creation and dissemination of frontier models and critical hardware components (“AI Oversight”). A robust Oversight regime would almost certainly require new legislation. In the absence of new legislation, however, it might be possible to accomplish some of the goals of an AI Oversight regime using existing legal authorities. This memorandum discusses a number of existing authorities in order of their likely utility for AI Oversight. The existing authorities that appear to be particularly promising include the Defense Production Act, the Export Administration Regulations, the International Emergency Economic Powers Act, the use of federal funding conditions, and Federal Trade Commission consumer protection authorities. Somewhat less promising authorities discussed in the memo include § 606(c) of the Communications Act of 1934, Committee on Foreign Investment in the United States review, the Atomic Energy Act, copyright and antitrust laws, the Biological Weapons Anti-Terrorism Act, the Chemical Weapons Convention Implementation Act, and the Federal Select Agent Program.
It has been suggested that frontier artificial intelligence (“AI”) models may in the near future pose serious risks to the national security of the United States—for example, by allowing terrorist groups or hostile foreign state actors to acquire chemical, biological, or nuclear weapons, spread dangerously compelling personalized misinformation on a grand scale, or execute devastating cyberattacks on critical infrastructure. Wise regulation of frontier models is, therefore, a national security imperative, and has been recognized as such by leading figures in academia,[ref 1] industry,[ref 2] and government.[ref 3]
One promising strategy for governance of potentially dangerous frontier models is “AI Oversight.” AI Oversight is defined as a comprehensive regulatory regime allowing the U.S. government to:
1) Track and license hardware for making frontier AI systems (“AI Hardware”),
2) Track and license the creation of frontier AI systems (“AI Creation”), and
3) License the dissemination of frontier AI systems (“AI Proliferation”).
Implementation of a comprehensive AI Oversight regime will likely require substantial new legislation. Substantial new federal AI governance legislation, however, may be many months or even years away. In the immediate and near-term future, therefore, government Oversight of AI Hardware, Creation, and Proliferation will have to rely on existing legal authorities. Of course, tremendously significant regulatory regimes, such as a comprehensive licensing program for a transformative new technology, are not typically—and, in the vast majority of cases, should not be—created by executive fiat without any congressional input. In other words, the short answer to the question of whether AI Oversight can be accomplished using existing authorities is “no.” The remainder of this memorandum attempts to lay out the long answer. Although a complete and effective Oversight regime based solely on existing authorities is an unlikely prospect, a broad survey of the authorities that could in theory contribute to such a regime may prove informative to AI governance researchers, legal scholars, and policymakers. In the interest of giving the most complete possible picture of all plausible or semi-plausible existing authorities for Oversight, this memorandum intentionally casts a wide net and errs on the side of overinclusiveness. It therefore includes some authorities which are unlikely to be used, authorities which would only indirectly or partially contribute to Oversight, and authorities which would likely face serious legal challenges if used in the manner proposed.
Each of the eleven sections below discusses one or more existing authorities that could be used for Oversight and evaluates the authority’s likely relevance. The sections are listed in descending order of evaluated relevance, with the more important and realistic authorities coming first and the more speculative or tangentially relevant authorities bringing up the rear. Some of the authorities discussed are “shovel-ready” and could be put into action immediately, while others would require some agency action, up to and including the promulgation of new regulations (but not new legislation), before being used in the manner suggested.
Included at the beginning of each Section are two bullet points addressing the aspects of Oversight to which each authority might contribute and a rough estimation of the authority’s likelihood of use for Oversight. No estimation of the likelihood that a given authority’s use could be successfully legally challenged is provided, because the outcome of a hypothetical lawsuit would depend too heavily on the details of the authority’s implementation for such an estimate to be useful.[ref 4] The likelihood of use is communicated in terms of rough estimations of likelihood (“reasonably likely,” “unlikely,” etc.) rather than, e.g., percentages, in order to avoid giving a false impression of confidence, given that predicting whether a given authority will be used even in the relatively short term is quite difficult.
The table below contains a brief description of each of the authorities discussed along with the aspects of Oversight to which they may prove relevant and the likelihood of their use for Oversight.
Defense Production Act
- Potentially applicable to: Licensing AI Hardware, Creation, and Proliferation; Tracking AI Hardware and Creation.
- Already being used to track AI Creation; reasonably likely to be used again in the future in some additional AI Oversight capacity.
The Defense Production Act (“DPA”)[ref 5] authorizes the President to take a broad range of actions to influence domestic industry in the interests of the “national defense.”[ref 6] The DPA was first enacted during the Korean War and was initially used solely for purposes directly related to defense industry production. The DPA has since been reenacted a number of times—most recently in 2019, for a six-year period expiring in September 2025—and the statutory definition of “national defense” has been repeatedly expanded by Congress.[ref 7] Today DPA authorities can be used to address and prepare for a variety of national emergencies.[ref 8] The DPA was originally enacted with seven Titles, four of which have since been allowed to lapse. The remaining Titles—I, III, and VII—furnish the executive branch with a number of authorities which could be used to regulate AI Hardware, Creation, and Proliferation.
Invocation of the DPA’s information-gathering authority in Executive Order 14110
Executive Order 14110 relies on the DPA in § 4.2, “Ensuring Safe and Reliable AI.”[ref 9] Section 4.2 orders the Department of Commerce to require companies “developing or demonstrating an intent to develop dual-use foundation models” to “provide the Federal Government, on an ongoing basis, with information, reports, or records” regarding (a) development and training of dual-use foundation models and security measures taken to ensure the integrity of any such training; (b) ownership and possession of the model weights of any dual-use foundation models and security measures taken to protect said weights; and (c) the results of any dual-use foundation model’s performance in red-teaming exercises.[ref 10] The text of the EO does not specify which provision(s) of the DPA are being invoked, but based on the language of EO § 4.2[ref 11] and on subsequent statements from the agency charged with implementing it,[ref 12] the principal relevant provision appears to be § 705, from Title VII of the DPA.[ref 13] According to social media statements by official Department of Commerce accounts, Commerce began requiring companies to “report vital information to the Commerce Department — especially AI safety test results” no later than January 29, 2024.[ref 14] However, no further details about the reporting requirements have been made public, and no proposed rules or notices relating to the reporting requirements have been issued publicly as of the writing of this memorandum.[ref 15] Section 705 grants the President broad authority to collect information in order to further national defense interests,[ref 16] and that authority has been delegated to the Department of Commerce pursuant to E.O. 13603.[ref 17]
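For illustration only, the three categories of information that § 4.2 directs Commerce to collect could be represented as a simple record along the lines of the sketch below. The field names and structure are hypothetical assumptions and do not reflect any actual BIS reporting form, since none has been made public.

```python
# Purely illustrative: a minimal record type mirroring the three categories of
# information that EO 14110 § 4.2 directs Commerce to collect. Field names are
# hypothetical; no actual BIS reporting form has been published.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DualUseFoundationModelReport:
    developer: str
    # (a) development and training activities, and security measures taken to
    #     ensure the integrity of the training process
    training_activities: str = ""
    training_integrity_measures: List[str] = field(default_factory=list)
    # (b) ownership and possession of model weights, and security measures
    #     taken to protect them
    weights_custodians: List[str] = field(default_factory=list)
    weights_security_measures: List[str] = field(default_factory=list)
    # (c) results of red-teaming exercises
    red_team_results: str = ""
```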
Section 705 authorizes the President to obtain information “by regulation, subpoena, or otherwise,” as the President deems necessary or appropriate to enforce or administer the Defense Production Act. In theory, this authority could be relied upon to justify a broad range of government efforts to track AI Hardware and Creation. Historically, § 705 has most often been used by the Department of Commerce’s Bureau of Industry and Security (“BIS”) to conduct “industrial base assessment” surveys of specific defense-relevant industries.[ref 18] For instance, BIS recently prepared an “Assessment of the Critical Supply Chains Supporting the U.S. Information and Communications Technology Industry” which concluded in February 2022.[ref 19] BIS last conducted an assessment of the U.S. artificial intelligence sector in 1994.[ref 20]
Republican elected officials, libertarian commentators, and some tech industry lobbying groups have questioned the legality of EO 14110’s use of the DPA and raised the possibility of a legal challenge.[ref 21] As no such lawsuit has yet been filed, it is difficult to evaluate § 4.2’s chances of surviving hypothetical future legal challenges. The arguments against its legality that have been publicly advanced—such as that the “Defense Production Act is about production… not restriction”[ref 22] and that AI does not present a “national emergency”[ref 23]—are legally dubious, in this author’s opinion.[ref 24] However, § 705 of the DPA has historically been used mostly to conduct “industrial base assessments,” i.e., surveys to collect information about defense-relevant industries.[ref 25] When the DPA was reauthorized in 1992, President George H.W. Bush remarked that using § 705 during peacetime to collect industrial base data from American companies would “intrude inappropriately into the lives of Americans who own and work in the Nation’s businesses.”[ref 26] While that observation is not in any sense legally binding, it does tend to show that EO 14110’s aggressive use of § 705 during peacetime is unusual by historical standards and presents potentially troubling issues relating to executive overreach. The fact that companies are apparently to be required to report on an indefinitely “ongoing basis”[ref 27] is also unusual, as past industrial base surveys have been snapshots of an industry’s condition at a particular time rather than semipermanent ongoing information-gathering institutions.
DPA Title VII: voluntary agreements and recruiting talent
Title VII includes a variety of provisions in addition to § 705, a few of which are potentially relevant to AI Oversight. Section 708 of the DPA authorizes the President to “consult with representatives of industry, business, financing, agriculture, labor, and other interests in order to provide for the making by such persons, with the approval of the President, of voluntary agreements and plans of action to help provide for the national defense.”[ref 28] Section 708 provides an affirmative defense against any civil or criminal antitrust suit for all actions taken in furtherance of a presidentially sanctioned voluntary agreement.[ref 29] This authority could be used to further the kind of cooperation between labs on safety-related issues that has not happened to date because of labs’ fear of antitrust enforcement.[ref 30] Cooperation between private interests in the AI industry could facilitate, for example, information-sharing regarding potential dangerous capabilities, joint AI safety research ventures, voluntary agreements to abide by shared safety standards, and voluntary agreements to pause or set an agreed pace for increases in the size of training runs for frontier AI models.[ref 31] This kind of cooperation could facilitate an effective voluntary pseudo-licensing regime in the absence of new legislation.
Sections 703 and 710 of the DPA could provide effective tools for recruiting talent for government AI roles. Under § 703, agency heads can hire individuals outside of the competitive civil service system and pay them enhanced salaries.[ref 32] Under § 710, the head of any governmental department or agency can establish and train a National Defense Executive Reserve (“NDER”) of individuals held in reserve “for employment in executive positions in Government during periods of national defense emergency.”[ref 33] Currently, there are no active NDER units, and the program has been considered something of a failure since the Cold War because of underfunding and mismanagement,[ref 34] but the statutory authority to create NDER units still exists and could be utilized if top AI researchers and engineers were willing to volunteer for NDER roles. Both §§ 703 and 710 could indirectly facilitate tracking and licensing by allowing information-gathering agencies like BIS, or agencies charged with administering a licensing regime, to hire expert personnel more easily.
DPA Title I: priorities and allocations authorities
Title I of the DPA empowers the President to require private U.S. companies to prioritize certain contracts in order to “promote the national defense.” Additionally, Title I purports to authorize the President to “allocate materials, services, and facilities” in any way he deems necessary or appropriate to promote the national defense.[ref 35] These so-called “priorities” and “allocations” authorities have been delegated to six federal agencies pursuant to Executive Order 13603.[ref 36] The use of these authorities is governed by a set of regulations known as the Defense Priorities and Allocations System (“DPAS”),[ref 37] which is administered by BIS.[ref 38] Under the DPAS, contracts can be assigned one of two priority ratings, “DO” or “DX.”[ref 39] All priority-rated contracts take precedence over all non-rated contracts, and DX contracts take priority over DO contracts.[ref 40]
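As a minimal sketch of the ordering rule just described, the following illustrative Python sequences a set of toy contracts so that DX-rated contracts come first, DO-rated contracts second, and unrated contracts last; the contract representation is an assumption made purely for illustration and does not reflect how DPAS ratings are actually administered.

```python
# Minimal sketch of the DPAS ordering rule described above: DX-rated contracts
# precede DO-rated contracts, and both precede unrated contracts. The contract
# representation is a toy assumption for illustration.
PRIORITY_RANK = {"DX": 0, "DO": 1, None: 2}  # lower rank = higher priority

contracts = [
    {"id": "C-101", "rating": None},
    {"id": "C-102", "rating": "DO"},
    {"id": "C-103", "rating": "DX"},
]

for contract in sorted(contracts, key=lambda c: PRIORITY_RANK[c["rating"]]):
    print(contract["id"], contract["rating"])
# Output order: C-103 (DX), C-102 (DO), C-101 (unrated)
```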
Because the DPA defines the phrase “national defense” expansively,[ref 41] the text of Title I can be interpreted to authorize a broad range of executive actions relevant to AI governance. For example, it has been suggested that the priorities authority could be used to prioritize government access to cloud-compute resources in times of crisis[ref 42] or to compel semiconductor companies to prioritize government contracts for chips over preexisting contracts with private buyers.[ref 43] Title I could also, in theory, be used for AI Oversight directly. For instance, the government could in theory attempt to institute a limited and partial licensing regime for AI Hardware and Creation by either (a) allocating limited AI Hardware resources such as chips to companies that satisfy licensing requirements promulgated by BIS, or (b) ordering companies that do not satisfy such requirements to prioritize work other than development of potentially dangerous frontier models.[ref 44]
The approach described would be an unprecedentedly aggressive use of Title I, and is unlikely to occur given the hesitancy of recent administrations to use the full scope of the presidential authorities Title I purports to convey. The allocations authority has not been used since the end of the Cold War,[ref 45] perhaps in part because of uncertainty regarding its legitimate scope.[ref 46] That said, guidance from the Defense Production Act Committee (“DPAC”), a body that “coordinate[s] and plan[s] for . . . the effective use of the priorities and allocations authorities,”[ref 47] indicates that the priorities and allocations authorities can be used to protect against, respond to, or recover from “acts of terrorism, cyberattacks, pandemics, and catastrophic disasters.”[ref 48] If the AI risk literature is to be believed, frontier AI models may soon be developed that pose risks related to all four of those categories.[ref 49]
The use of the priorities authority during the COVID-19 pandemic tends to show that, even in recognized and fairly severe national emergencies, extremely aggressive uses of the priorities and allocations authorities are unlikely. FEMA and the Department of Health and Human Services (“HHS”) used the priorities authority to require companies to produce N95 facemasks and ventilators on a government-mandated timeline,[ref 50] and HHS and the Department of Defense (“DOD”) also issued priority ratings to combat supply chain disruptions and expedite the acquisition of critical equipment and chemicals for vaccine development as part of Operation Warp Speed.[ref 51] But the Biden administration did not invoke the allocations authority at any point, and the priorities authority was used for its traditional purpose—to stimulate, rather than to prevent or regulate, the industrial production of specified products.
DPA Title III: subsidies for industry
Title III of the DPA authorizes the President to issue subsidies, purchase commitments and purchases, loan guarantees, and direct loans to incentivize the development of industrial capacity in support of the national defense.[ref 52] Title III also establishes a Defense Production Act Fund, from which all Title III actions are funded and into which government proceeds from Title III activities and appropriations by Congress are deposited.[ref 53] The use of Title III requires the President to make certain determinations, including that the resource or technology to be produced is essential to the national defense and that Title III is the most cost-effective and expedient means of addressing the relevant shortfall in domestic industrial capacity.[ref 54] The responsibility for making these determinations is non-delegable.[ref 55] The Title III award program is overseen by DOD.[ref 56]
Like Title I, Title III authorities were invoked a number of times in order to address the COVID-19 pandemic. For example, DOD invoked Title III in April 2020 to award $133 million for the production of N-95 masks and again in May 2020 to award $138 million in support of vaccine supply chain development.[ref 57] More recently, President Biden issued a Presidential Determination in March 2023 authorizing Title III expenditures to support domestic manufacturing of certain important microelectronics supply chain components—printed circuit boards and advanced packaging for semiconductor chips.[ref 58]
It has been suggested that Title III subsidies and purchase commitments could be used to incentivize increased domestic production of important AI hardware components, or to guarantee the purchase of data useful for military or intelligence-related machine learning applications.[ref 59] This would allow the federal government to exert some influence over the direction of the funded projects, although the significance of that influence would be limited by the amount of available funding in the DPA fund unless Congress authorized additional appropriations. With respect to Oversight, the government could attach conditions intended to facilitate tracking or licensing regimes to contracts entered into under Title III.[ref 60]
Export controls
- Potentially applicable to: Licensing AI Hardware, Creation, and Proliferation
- Already being used to license exports of AI Hardware; new uses relating to Oversight likely in the near future
Export controls are legislative or regulatory tools used to restrict the export of goods, software, and knowledge, usually in order to further national security or foreign policy interests. Export controls can also sometimes be used to restrict the “reexport” of controlled items from one foreign country to another, or to prevent controlled items from being shown to or used by foreign persons inside the U.S.
Currently active U.S. export control authorities include: (1) the International Traffic in Arms Regulations (“ITAR”), which control the export of weapons and other articles and services with strictly military applications;[ref 61] (2) multilateral export control arrangements in which the United States participates, such as the Wassenaar Arrangement;[ref 62] and (3) the Export Administration Regulations (“EAR”), which are administered by BIS and which primarily regulate “dual use” items, which have both military and civilian applications.[ref 63] This section focuses on the EAR, the authority most relevant to Oversight.
Export Administration Regulations
The EAR incorporate the Commerce Control List (“CCL”).[ref 64] The CCL is a list, maintained by BIS, of more than 3,000 “items” which are prohibited from being exported, or prohibited from being exported to certain countries, without a license from BIS.[ref 65] The EAR define “item” and “export” broadly—software, data, and tangible goods can all be “items,” and “export” can include, for example, showing controlled items to a foreign national in the United States or posting non-public data to the internet.[ref 66] However, software or data that is “published,” i.e., “made available to the public without restrictions upon its further dissemination,” is generally not subject to the EAR. Thus, the EAR generally cannot be used to restrict the publication or export of free and open-source software.[ref 67]
The CCL currently contains a fairly broad set of export restrictions that require a license for exports to China of advanced semiconductor chips, input materials used in the fabrication of semiconductors, and semiconductor manufacturing equipment.[ref 68] These restrictions are explicitly intended to “limit the PRC’s ability to obtain advanced computing chips or further develop AI and ‘supercomputer’ capabilities for uses that are contrary to U.S. national security and foreign policy interests.”[ref 69] The CCL also currently restricts “neural computers”[ref 70] and a narrowly-defined category of AI software useful for analysis of drone imagery[ref 71]—“geospatial imagery ‘software’ ‘specially designed’ for training a Deep Convolutional Neural Network to automate the analysis of geospatial imagery and point clouds.”[ref 72]
In addition to the item-based CCL, the EAR include end-user controls, including an “Entity List” of individuals and companies subject to export licensing requirements.[ref 73] Some existing end-user controls are designed to protect U.S. national security interests by hindering the ability of rivals like China to effectively conduct defense-relevant AI research. For example, in December 2022 BIS added a number of “major artificial intelligence (AI) chip research and development, manufacturing and sales entities” that “are, or have close ties to, government organizations that support the Chinese military and the defense industry” to the Entity List.[ref 74]
The EAR also include, at 15 C.F.R. § 744, end-use based “catch-all” controls, which effectively prohibit the unlicensed export of items if the exporter knows or has reason to suspect that the item will be directly or indirectly used in the production, development, or use of missiles, certain types of drones, nuclear weapons, or chemical or biological weapons.[ref 75] Section 744 also imposes a license requirement on the export of items which the exporter knows are intended for a military end use.[ref 76]
Additionally, 15 C.F.R. § 744.6 requires “U.S. Persons” (a term which includes organizations as well as individuals) to obtain a license from BIS before “supporting” the design, development, production, or use of missiles or nuclear, biological, or chemical weapons, “supporting” the military intelligence operations of certain countries, or “supporting” the development or production of specified types of semiconductor chips in China. The EAR definition of “support” is extremely broad and covers “performing any contract, service, or employment you know may assist or benefit” the prohibited end uses in any way.[ref 77]
For both the catch-all and U.S. Persons restrictions, BIS is authorized to send so-called “is informed” letters to individuals or companies advising that a given action requires a license because the action might result in a prohibited end-use or support a prohibited end-use or end-user.[ref 78] This capability allows BIS to exercise a degree of control over exports and over the actions of U.S. Persons immediately, without going through the time-consuming process of Notice and Comment Rulemaking. For instance, BIS sent an “is informed” letter to NVIDIA on August 26, 2022, imposing a new license requirement on the export of certain chips to China and Russia, effective immediately, because BIS believed that there was a risk the chips would be used for military purposes.[ref 79]
BIS has demonstrated a willingness to update its semiconductor export regime quickly and flexibly. For instance, after BIS restricted exports of AI-relevant chips in a rule issued on October 7, 2022, Nvidia modified its market-leading A100 and H100 chips to comply with the regulations and began to export the resultant modified A800 and H800 chips to China.[ref 80] On October 17, 2023, BIS announced a new interim final rule prohibiting exports of A800 and H800 chips to China and waived the 30-day waiting period normally required by the Administrative Procedure Act so that the interim rule became effective just a few days after being announced.[ref 81] Commerce Secretary Gina Raimondo stated that “[i]f [semiconductor companies] redesign a chip around a particular cut line that enables them to do AI, I’m going to control it the very next day.”[ref 82]
In sum, the EAR currently impose a license requirement on a number of potentially dangerous actions relating to AI Hardware, Creation, and Proliferation. These controls have thus far been used primarily to restrict exports of AI Hardware, but in theory they could also be used to impose licensing requirements on activities relating to AI Creation and Proliferation. The primary legal issue with this kind of regulation arises from the First Amendment.
Export controls and the First Amendment
Suppose that BIS determined that a certain AI model would be useful to terrorists or foreign state actors in the creation of biological weapons. Could BIS inform the developer of said model of this determination and prohibit the developer from making the model publicly available? Alternatively, could BIS add model weights which would be useful for training dangerous AI models to the CCL and require a license for their publication on the internet?
One potential objection to the regulations described above is that they would violate the First Amendment as unconstitutional prior restraints on speech. Courts have held that source code can be constitutionally protected expression, and in the 1990s export regulations prohibiting the publication of encryption software were struck down as unconstitutional prior restraints.[ref 83] However, the question of when computer code constitutes protected expression is a subject of continuing scholarly debate,[ref 84] and there is a great deal of uncertainty regarding the scope of the First Amendment’s application to export controls on software and training data. The argument for restricting model weights may be stronger than the argument for restricting other relevant software or code items, because model weights are purely functional rather than communicative; they direct a computer’s operations but cannot be meaningfully read or interpreted by humans.[ref 85]
Currently, the EAR avoids First Amendment issues by allowing a substantial exception to existing licensing requirements for “published” information.[ref 86] A great deal of core First Amendment communicative speech, such as basic research in universities, is “published” and therefore not subject to the EAR. Non-public proprietary software, however, can be placed on the CCL and restricted in much the same manner as tangible goods, usually without provoking any viable First Amendment objection.[ref 87] Additionally, the EAR’s recently added “U.S. Persons” controls regulate actions rather than directly regulating software, and it has been argued that this allows BIS to exercise some control over free and open source software without imposing an unconstitutional prior restraint, since under some circumstances providing access to an AI model may qualify as unlawful “support” for prohibited end-uses.[ref 88]
Emergency powers
- Applicable to: Tracking and Licensing AI Hardware & Creation; Licensing Proliferation
- Already in use (IEEPA, to mandate know-your-customer requirements for IAAS providers pursuant to EO 14110); unlikely to be used (§ 606(c))
The United States Code contains a number of statutes granting the President extraordinary powers that can only be used following the declaration of a national emergency. This section discusses two such emergency provisions—the International Emergency Economic Powers Act[ref 89] and § 606(c) of the Communications Act of 1934[ref 90]—and their existing and potential application to AI Oversight.
There are three existing statutory frameworks governing the declaration of emergencies: the National Emergencies Act (“NEA”),[ref 91] the Robert T. Stafford Disaster Relief and Emergency Assistance Act,[ref 92] and the Public Health Service Act.[ref 93] Both of the authorities discussed in this section can be invoked following an emergency declaration under the NEA.[ref 94] The NEA is a statutory framework that provides a procedure for declaring emergencies and imposes certain requirements and limitations on the exercise of emergency powers.[ref 95]
International Emergency Economic Powers Act
The most frequently invoked emergency authority under U.S. law is the International Emergency Economic Powers Act (“IEEPA”), which grants the President expansive powers to regulate international commerce.[ref 96] The IEEPA gives the President broad authority to impose a variety of economic sanctions on individuals and entities during a national emergency.[ref 97] The IEEPA has been “the sole or primary statute invoked in 65 of the 71”[ref 98] emergencies declared under the NEA since the NEA’s enactment in 1976.
The IEEPA authorizes the President to “investigate, regulate, or prohibit” transactions subject to U.S. jurisdiction that involve a foreign country or national.[ref 99] The IEEPA also authorizes the investigation, regulation, or prohibition of any acquisition or transfer involving a foreign country or national.[ref 100] The emergency must originate “in whole or in substantial part outside the United States” and must relate to “the national security, foreign policy, or economy of the United States.”[ref 101] There are some important exceptions to the IEEPA’s general grant of authority—all “personal communications” as well as “information” and “informational materials” are outside of the IEEPA’s scope.[ref 102] The extent to which these protections would prevent the IEEPA from effectively being used for AI Oversight is unclear, because there is legal uncertainty as to whether, e.g., the transfer of AI model training weights overseas would be covered by one or more of the exceptions. If the relevant interpretive questions are resolved in a manner conducive to strict regulation, a partial licensing regime could be implemented under the IEEPA by making transactions contingent on safety and security evaluations. For example, foreign companies could be required to follow certain safety and security measures in order to offer subscriptions or sell an AI model in the U.S., or U.S.-based labs could be required to undergo safety evaluations prior to selling subscriptions to an AI service outside the country.
EO 14110 invoked the IEEPA to support §§ 4.2(c) and 4.2(d), provisions requiring the Department of Commerce to impose “Know Your Customer” (“KYC”) reporting requirements on U.S. Infrastructure as a Service (“IAAS”) providers. The emergency declaration justifying this use of the IEEPA originated in EO 13694, “Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities” (April 1, 2015), which declared a national emergency relating to “malicious cyber-enabled activities originating from, or directed by persons located, in whole or in substantial part, outside the United States.”[ref 103] BIS introduced a proposed rule to implement the EO’s KYC provisions on January 29, 2024.[ref 104] The proposed rule would require U.S. IAAS providers (i.e., providers of cloud-based on-demand compute, storage, and networking services) to submit a report to BIS regarding any transaction with a foreign entity that could result in the training of an advanced and capable AI model that could be used for “malicious cyber-enabled activity.”[ref 105] Additionally, the rule would require each U.S. IAAS provider to develop and follow an internal “Customer Identification Program.” Each Customer Identification Program would have to provide for verification of the identities of foreign customers, provide for collection and maintenance of certain information about those customers, and ensure that foreign resellers of the U.S. provider’s IAAS products similarly verify customer identities and collect and maintain the same information.[ref 106]
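The sketch below illustrates, in purely hypothetical terms, the kind of record-keeping and reporting logic a Customer Identification Program of this sort might involve. The field names, the compliance summary, and the reporting trigger are assumptions made for illustration and are not drawn from the text of the proposed rule.

```python
# Purely illustrative sketch of Customer Identification Program record-keeping
# under a KYC regime like the one described above. Field names, the compliance
# summary, and the reporting trigger are hypothetical, not drawn from the rule.
from dataclasses import dataclass

@dataclass
class ForeignCustomerRecord:
    name: str
    jurisdiction: str
    identity_verified: bool   # the provider must verify the customer's identity
    beneficial_owner: str     # example of information collected and maintained
    reseller_compliant: bool  # foreign resellers apply equivalent checks

def transaction_compliance_summary(record: ForeignCustomerRecord,
                                   could_train_capable_model: bool) -> dict:
    """Hypothetical compliance summary for a single IAAS transaction."""
    return {
        "identity_verification_complete": record.identity_verified,
        "reseller_obligations_met": record.reseller_compliant,
        # Report to BIS if the transaction could result in training a large AI
        # model usable for malicious cyber-enabled activity.
        "report_to_bis": could_train_capable_model,
    }

customer = ForeignCustomerRecord(
    name="ExampleCo Ltd.", jurisdiction="XY", identity_verified=True,
    beneficial_owner="J. Doe", reseller_compliant=True,
)
print(transaction_compliance_summary(customer, could_train_capable_model=True))
```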
In short, the proposed rule is designed to allow BIS to track attempts at AI Creation by foreign entities who attempt to purchase the kinds of cloud compute resources required to train an advanced AI model, and to prevent such purchases from occurring. This tracking capability, if effectively implemented, would prevent foreign entities from circumventing export controls on AI Hardware by simply purchasing the computing power of advanced U.S. AI chips through the cloud.[ref 107] The EO’s use of the IEEPA has so far been considerably less controversial than the use of the DPA to impose reporting requirements on the creators of frontier models.[ref 108]
Communications Act of 1934, § 606(c)
Section 606(c) of the Communications Act of 1934 could conceivably authorize a licensure program for AI Creation or Proliferation in an emergency by allowing the President to direct the closure or seizure of any networked computers or data centers used to run AI systems capable of aiding navigation. However, it is unclear whether courts would interpret the Act in such a way as to apply to AI systems, and any such use of Communications Act powers would be completely unprecedented. Therefore, § 606(c) is unlikely to be used for AI Oversight.
Section 606(c) confers emergency powers on the President “[u]pon proclamation by the President that there exists war or a … national emergency” if it is deemed “necessary in the interest of national security or defense.” The National Emergencies Act (“NEA”) of 1976 governs the declaration of national emergencies and establishes requirements for accountability and reporting during emergencies.[ref 109] Neither statute defines “national emergency.” In an emergency, the President may (1) “suspend or amend … regulations applicable to … stations or devices capable of emitting electromagnetic radiations”; (2) close “any station for radio communication, or any device capable of emitting electromagnetic radiations between 10 kilocycles and 100,000 megacycles [10 kHz–100 GHz], which is suitable for use as a navigational aid beyond five miles”; and (3) authorize “use or control” of the same.[ref 110]
In other words, § 606(c) empowers the President to seize or shut down certain types of electronic device during a national emergency. The applicable definition of “device” could arguably encompass most of the computers, servers, and data centers utilized in AI Creation and Proliferation.[ref 111] Theoretically, § 606(c) could be invoked to sanction the seizure or closure of these devices. However, § 606(c) has never been utilized, and there is significant uncertainty concerning whether courts would allow its application to implement a comprehensive program of AI Oversight.
Federal funding conditions
- Potentially applicable to: Tracking and Licensing AI Hardware & AI Creation; Licensing AI Proliferation
- Reasonably likely to be used for Oversight in some capacity
Attaching conditions intended to promote AI safety to federal grants and contracts could be an effective way of creating a partial licensing regime for AI Creation and Proliferation. Such a regime could be circumvented by simply forgoing federal funding, but could still contribute to an effective overall scheme for Oversight.
Funding conditions for federal grants and contracts
Under the Federal Property and Administrative Services Act, also known as the Procurement Act,[ref 112] the President can “prescribe policies and directives” for government procurement, including via executive order.[ref 113] Generally, courts have found that the President may order agencies to attach conditions to federal contracts so long as a “reasonably close nexus”[ref 114] exists between the executive order and the Procurement Act’s purpose, which is to provide an “economical and efficient system” for procurement.[ref 115] This is a “lenient standard[],”[ref 116] and it is likely that an executive order directing agencies to include conditions intended to promote AI safety in all AI-related federal contracts would be upheld under it.
Presidential authority to impose a similar condition on AI-related federal grants via executive order is less clear. Generally, “the ability to place conditions on federal grants ultimately comes from the Spending Clause, which empowers Congress, not the Executive, to spend for the general welfare.”[ref 117] It is therefore likely that any conditions imposed on federal grants will be imposed by legislation rather than by executive order. However, plausible arguments for Presidential authority to impose grant conditions via executive order in certain circumstances do exist, and even in the absence of an explicit condition executive agencies often wield substantial discretion in administering grant programs.[ref 118]
Implementation of federal contract conditions
Government-wide procurement policies are set by the Federal Acquisition Regulation (“FAR”), which is maintained by the Office of Federal Procurement Policy (“OFPP”).[ref 119] A number of FAR regulations require the insertion of a specified clause into all contracts of a certain type; for example, FAR § 23.804 requires the insertion of clauses imposing detailed reporting and tracking requirements for ozone-depleting chemicals into all federal contracts for refrigerators, air conditioners, and similar goods.[ref 120] Amending the FAR to require a clause that imposes safe-development requirements for AI and prohibits the publication of any sufficiently advanced model that has not been reviewed and deemed safe in accordance with specified procedures would effectively impose a licensing requirement on AI Creation and Proliferation, albeit one that would apply only to entities receiving government funding.
A more modest, real-world approach to implementing federal contract conditions that encourage the safe development of AI under existing authorities appears in Executive Order 14110. Section 4.4(b) of that EO directs the White House Office of Science and Technology Policy (OSTP) to release a framework designed to encourage DNA synthesis companies to screen their customers, in order to reduce the danger of, for example, terrorist organizations acquiring the tools necessary to synthesize biological weapons.[ref 121] Recipients of federal research funding will be required to adhere to the OSTP’s Framework, which was released in April 2024.[ref 122]
Potential scope of oversight via conditions on federal funding
Depending on their nature and scope, conditions imposed on grants and contracts could facilitate the tracking and/or licensing of AI Hardware, Creation, and Proliferation. The conditions could, for example, specify best practices to follow during AI Creation, and prohibit labs that accepted federal funds from developing frontier models without observing said practices; this, in effect, would create a non-universally applicable licensing regime for AI Creation. The conditions could also specify procedures (e.g. audits by third-party or government experts) for certifying that a given model could safely be made public, and prohibit the release of any AI model developed using a sufficiently large training run until it was so certified. For Hardware, the conditions could require contractors and grantees to track any purchase or sale of the relevant chips and chipmaking equipment and report all such transactions to a specified government office.
The major limitation of Oversight via federal funding conditions is that the conditions would not apply to entities that do not receive funding from the federal government. However, it is possible that this regulatory gap could be at least partially closed by drafting the included conditions to prohibit contractors and grantees from contracting with companies that fail to abide by some or all of the conditions. This would be a novel and aggressive use of federal funding conditions, but would likely hold up in court.
FTC consumer protection authorities
- Applicable to: Tracking and Licensing AI Creation, Licensing AI Proliferation
- Unlikely to be used for licensing, but somewhat likely to be involved in tracking AI Creation in some capacity
The Federal Trade Commission Act (“FTC Act”) includes broad consumer protection authorities, two of which are identified in this section as being potentially relevant to AI Oversight. Under § 5 of the FTC Act, the Federal Trade Commission (“FTC”) can pursue enforcement actions in response to “unfair or deceptive acts or practices in or affecting commerce”[ref 123]; this authority could be relevant to licensing for AI Creation and Proliferation. And under § 6(b), the FTC can conduct industry studies that could be useful for tracking AI Creation.
The traditional test for whether a practice is “unfair,” codified at § 5(n), asks whether the practice: (1) “causes or is likely to cause substantial injury to consumers” (2) which is “not reasonably avoidable by consumers themselves” and (3) is not “outweighed by countervailing benefits to consumers or to competition.”[ref 124] “Deceptive” practices have been defined as involving: (1) a representation, omission, or practice, (2) that is material, and (3) that is “likely to mislead consumers acting reasonably under the circumstances.”[ref 125]
FTC Act § 5 oversight
Many potentially problematic or dangerous applications of highly capable LLMs would involve “unfair or deceptive acts or practices” under § 5. For example, AI safety researchers have warned of emerging risks from frontier models capable of “producing and propagating highly persuasive, individually tailored, multi-modal disinformation.”[ref 126] The commercial release of a model with such capabilities would likely constitute a violation of § 5’s “deceptive practices” prong.[ref 127]
Furthermore, the FTC has in recent decades adopted a broad plain-meaning interpretation of the “unfair practices” prong, meaning that irresponsible AI development practices that impose risks on consumers could constitute an “unfair practice.”[ref 128] The FTC has recently conducted a litigation campaign to impose federal data security regulation via § 5 lawsuits, and this campaign could serve as a model for a future effort to require AI labs to implement AI safety best practices while developing and publishing frontier models.[ref 129] In its data security lawsuits, the FTC argued that § 5’s prohibition of unfair practices imposed a duty on companies to implement reasonable data security measures to protect their consumers’ data.[ref 130] The vast majority of the FTC’s data security cases ended in settlements that required the defendants to implement certain security best practices and agree to third party compliance audits.[ref 131] Furthermore, in several noteworthy data security cases, the FTC has reached settlements under which defendant companies have been required to delete models developed using illegally collected data.[ref 132]
The FTC can bring § 5 claims based on prospective or “likely” harms to consumers.[ref 133] And § 5 can be enforced against defendants whose conduct is not the most proximate cause of an injury, such as an AI lab whose product is foreseeably misused by criminals to deceive or harm consumers, when the defendant provided others with “the means and instrumentalities for the commission of deceptive acts or practices.”[ref 134] Thus, if courts are willing to accept that the commercial release of models developed without observation of AI safety best practices is an “unfair” or “deceptive” act or practice under § 5, the FTC could impose, on a case-by-case basis,[ref 135] something resembling a licensing regime addressing areas of AI Creation and Proliferation. As in the data security settlements, the FTC could attempt to reach settlements with AI labs requiring the implementation of security best practices and third party compliance audits, as well as the deletion of models created in violation of § 5. This would not be an effective permanent substitute for a formal licensing regime, but could function as a stop-gap measure in the short term.
FTC industry studies
Section 6(b) of the FTC Act authorizes the FTC to conduct industry studies.[ref 136] In conducting these studies, the FTC has the authority to collect confidential business information and can require companies to disclose information even in the absence of any allegation of wrongdoing. This capability could be useful for tracking AI Creation.
Limitations of FTC oversight authority
The FTC has already signaled that it intends to “vigorously enforce” § 5 against companies that use AI models to automate decisionmaking in a way that results in discrimination on the basis of race or other protected characteristics.[ref 137] Existing guidance also shows that the FTC is interested in pursuing enforcement actions against companies that use LLMs to deceive consumers.[ref 138] The agency has already concluded a few successful § 5 enforcement actions targeting companies that used (non-frontier) AI models to operate fake social media accounts and deceptive chatbots.[ref 139] And in August 2023 the FTC brought a § 5 “deceptive acts or practices” enforcement action alleging that a company named Automators LLC had deceived customers with exaggerated and untrue claims about the effectiveness of the AI tools it used, including the use of ChatGPT to create customer service scripts.[ref 140]
Thus far, however, there is little indication that the FTC is inclined to take on broader regulatory responsibilities with respect to AI safety. The § 5 prohibition on “unfair practices” has traditionally been used for consumer protection, and commentators have suggested that it would be an “awkward tool” for addressing more serious national-security-related AI risk scenarios such as weapons development, which the FTC has not traditionally dealt with.[ref 141] Moreover, even if the FTC were inclined to pursue an aggressive AI Oversight agenda, the agency’s increasingly politically divisive reputation might contribute to political polarization around the issue of AI safety and inhibit bipartisan regulatory and legislative efforts.
Committee on Foreign Investment in the United States
- Potentially applicable to: Tracking and/or Licensing AI Hardware and Creation
- Unlikely to be used to directly track or license frontier AI models, but could help to facilitate effective Oversight.
The Committee on Foreign Investment in the United States (“CFIUS”) is an interagency committee charged with reviewing certain foreign investments in U.S. businesses or real estate and with mitigating the national security risks created by such transactions.[ref 142] If CFIUS determines that a given investment threatens national security, CFIUS can recommend that the President block or unwind the transaction.[ref 143] Since 2012, Presidents have blocked six transactions at the recommendation of CFIUS, all of which involved an attempt by a Chinese investor to acquire a U.S. company (or, in one instance, U.S.-held shares of a German company).[ref 144] In three of the six blocked transactions, the company targeted for acquisition was a semiconductor company or a producer of semiconductor manufacturing equipment.[ref 145]
Congress expanded CFIUS’s scope and jurisdiction in 2018 by enacting the Foreign Investment Risk Review Modernization Act of 2018 (“FIRRMA”).[ref 146] FIRRMA was enacted in part because of a Pentagon report warning that China was circumventing CFIUS by acquiring minority stakes in U.S. startups working on “critical future technologies” including artificial intelligence.[ref 147] This, the report warned, could lead to large-scale technology transfers from the U.S. to China, which would negatively impact the economy and national security of the U.S.[ref 148] Before FIRRMA, CFIUS could only review investments that might result in at least partial foreign control of a U.S. business.[ref 149] Under Department of the Treasury regulations implementing FIRRMA, CFIUS can now review “any direct or indirect, non-controlling foreign investment in a U.S. business producing or developing critical technology.”[ref 150] President Biden specifically identified artificial intelligence as a “critical technology” under FIRRMA in Executive Order 14083.[ref 151]
CFIUS imposes, in effect, a licensing requirement for foreign investment in companies working on AI Hardware and AI Creation. It also facilitates tracking of AI Hardware and Creation, since it reduces the risk of cutting-edge American advances, subject to American Oversight, being clandestinely transferred to countries in which U.S. Oversight of any kind is impossible. A major goal of any AI Oversight regime will be to stymie attempts by foreign adversaries like China and Russia to acquire U.S. AI capabilities, and CFIUS (along with export controls) will play a major role in the U.S. government’s pursuit of this goal.
Atomic Energy Act
- Applicable to: Licensing AI Creation and Proliferation
- Somewhat unlikely to be used to create a licensing regime in the absence of new legislation
The Atomic Energy Act (“AEA”) governs the development and regulation of nuclear materials and information. The AEA prohibits the disclosure of “Restricted Data,” a term defined to include all data concerning the “design, manufacture, or utilization of atomic weapons.”[ref 152] The AEA also prohibits communication, transmission, or disclosure of any “information involving or incorporating Restricted Data” when there is “reason to believe such data will be utilized to injure the United States or to secure an advantage to any foreign nation.” A sufficiently advanced frontier model, even one not specifically designed to produce information relating to nuclear weapons, might be capable of producing Restricted Data based on inferences from or analysis of publicly available information.[ref 153]
A permitting system that regulates access to Restricted Data already exists.[ref 154] Additionally, the Attorney General can seek a prospective court-ordered injunction against any “acts or practices” that the Department of Energy (“DOE”) believes will violate the AEA.[ref 155] Thus, licensing AI Creation and Proliferation under the AEA could be accomplished by promulgating DOE regulations stating that AI models that do not meet specified safety criteria are, in DOE’s judgment, likely to be capable of producing Restricted Data and therefore subject to the permitting requirements of 10 C.F.R. § 725.
However, there are a number of potential legal issues that make the application of the AEA to AI Oversight unlikely. For instance, there might be meritorious First Amendment challenges to the constitutionality of the AEA itself or to the licensing regime proposed above, which could be deemed a prior restraint of speech.[ref 156] Or, it might prove difficult to establish beforehand that an AI lab had “reason to believe” that a frontier model would be used to harm the U.S. or to secure an advantage for a foreign state.[ref 157]
Copyright law
- Potentially applicable to: Licensing AI Creation and Proliferation
- Unlikely to be used directly for Oversight, but will likely indirectly affect Oversight efforts
Intellectual property (“IP”) law will undoubtedly play a key role in the future development and regulation of generative AI. IP’s role in AI Oversight, narrowly understood, is more limited. That said, there are low-probability scenarios in which U.S. copyright law[ref 158] could contribute to an ad hoc licensing regime for frontier AI models; this section discusses that possibility. In September and October 2023, OpenAI was named as a defendant in a number of putative class action copyright lawsuits.[ref 159] The complaints in these suits allege that OpenAI trained GPT-3, GPT-3.5, and GPT-4 on datasets including hundreds of thousands of pirated books downloaded from digital repositories such as Z-Library or LibGen.[ref 160] In December 2023, the New York Times filed a copyright lawsuit against OpenAI and Microsoft alleging that OpenAI infringed its copyrights by using Times articles in its training datasets.[ref 161] The Times also claimed that GPT-4 had “memorized” long sections of copyrighted articles and could “recite large portions of [them] verbatim” with “minimal prompting.”[ref 162]
The eventual outcome of these lawsuits is uncertain. Some commentators have suggested that the infringement case against OpenAI is strong and that the use of copyrighted material in a training run is copyright infringement.[ref 163] Others have suggested that using copyrighted work for an LLM training run falls under fair use, if it implicates copyright law at all, because training a model on works meant for human consumption is a transformative use.[ref 164]
In a worst-case scenario for AI labs, however, a loss in court could in theory result in an injunction prohibiting OpenAI from using copyrighted works in its training runs, along with statutory damages of up to $150,000 per copyrighted work infringed.[ref 165] The dataset that OpenAI is alleged to have used to train GPT-3, GPT-3.5, and GPT-4 contains over 100,000 copyrighted works,[ref 166] meaning that the upper bound on potential statutory damages for OpenAI or any other AI lab that used the same dataset to train a frontier model would be upwards of $15 billion.
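To make the arithmetic behind that figure explicit (treating 100,000 works and the $150,000 statutory maximum for willful infringement as rough inputs, so the result is an upper bound rather than a predicted award):

```latex
100{,}000 \text{ works} \times \$150{,}000 \text{ per work} = \$15{,}000{,}000{,}000 = \$15 \text{ billion}
```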
Such a decision would have a significant impact on the development of frontier LLMs in the United States. The amount of text required to train a cutting-edge LLM is such that an injunction requiring OpenAI and its competitors to train their models without the use of any copyrighted material would require the labs to retool their approach to training runs.
Given the U.S. government’s stated commitment to maintaining U.S. leadership in artificial intelligence,[ref 167] it is unlikely that Congress would allow such a decision to inhibit the development of LLMs in the United States on anything resembling a permanent basis. But copyright law could in theory impose, however briefly, a de facto halt on large training runs in the United States. If this occurred, the necessity of Congressional intervention[ref 168] would create a natural opportunity for imposing a licensing requirement on AI Creation.
Antitrust authorities
- Applicable to: Tracking and Licensing AI Hardware and AI Creation
- Unlikely to be used directly for government tracking or licensing regimes, but could facilitate the creation of an imperfect private substitute for true Oversight
U.S. antitrust authorities include the Sherman Antitrust Act of 1890[ref 169] and § 5 of the FTC Act,[ref 170] both of which prohibit anticompetitive conduct that harms consumers. The Sherman Act is enforced primarily by the Department of Justice’s (“DOJ”) Antitrust Division, while § 5 of the FTC Act is enforced by the FTC.
This section focuses on a scenario in which non-enforcement of antitrust law under certain circumstances could facilitate the creation of a system of voluntary agreements between leading AI labs as an imperfect and temporary substitute for a governmental Oversight regime. As discussed above in Section 1, one promising short-term option to ensure the safe development of frontier models prior to the enactment of comprehensive Oversight legislation is for leading AI labs to enter into voluntary agreements to abide by responsible AI development practices. In the absence of cooperation, “harmful race dynamics” can develop in which the winner-take-all nature of a race to develop a valuable new technology can incentivize firms to disregard safety, transparency, and accountability.[ref 171]
A large number of voluntary agreements have been proposed, notably including the “Assist Clause” in OpenAI’s charter. The Assist Clause states that, in order to avoid “late-stage AGI development becoming a competitive race without time for adequate safety precautions,” OpenAI commits to “stop competing with and start assisting” any safety-conscious project that comes close to building Artificial General Intelligence before OpenAI does.[ref 172] Other potentially useful voluntary agreements include agreements to: (1) abide by shared safety standards, (2) engage in joint AI safety research ventures, (3) share information, including by mutual monitoring, sharing reports about incidents during safety testing, and comprehensively accounting for compute usage,[ref 173] (4) pause or set an agreed pace for increases in the size of training runs for frontier AI models, and/or (5) pause specified research and development activities for all labs whenever one lab develops a model that exhibits dangerous capabilities.[ref 174]
Universal, government-administered regimes for tracking and licensing AI Hardware, Creation, and Proliferation would be preferable to the voluntary agreements described above for a number of reasons, notably including ease of enforcement and the absence of economic incentives for companies to defect or simply decline to participate. However, many of the proposed agreements could accomplish some of the goals of AI Oversight. Compute accounting, for example, would be a substitute (albeit an imperfect one) for comprehensive tracking of AI Hardware, and other information-sharing agreements would be imperfect substitutes for tracking AI Creation. Agreements to cooperatively pause upon discovery of dangerous capabilities would serve as an imperfect substitute for an AI Proliferation licensing regime. Agreements to abide by shared safety standards would substitute for an AI Creation licensing regime, although the voluntary nature of such an arrangement would to some extent defeat the point of a licensing regime.
All of the agreements proposed, however, raise potential antitrust concerns. OpenAI’s Assist Clause, for example, could accurately be described as an agreement to restrict competition,[ref 175] as could cooperative pausing agreements.[ref 176] Information-sharing agreements between competitors can also constitute antitrust violations, depending on the nature of the information shared and the purpose for which competitors share it.[ref 177] DOJ or FTC enforcement proceedings against AI companies over such voluntary agreements—or even uncertainty regarding the possibility of such enforcement actions—could deter AI labs from implementing a system for partial self-Oversight.
One option for addressing such antitrust concerns would be the use of § 708 of the DPA, discussed above in Section 1, to officially sanction voluntary agreements between companies that might otherwise violate antitrust laws. Alternatively, the FTC and the DOJ could publish guidance informing AI labs of their respective positions on whether and under what circumstances a given type of voluntary agreement could constitute an antitrust violation.[ref 178] In the absence of some sort of guidance or safe harbor, the risk-averse in-house legal teams at leading AI companies (some of which are presently involved in and/or staring down the barrel of ultra-high-stakes antitrust litigation[ref 179]) are unlikely to allow any significant cooperation or communication between rank and file employees.
There is significant historical precedent for national security concerns playing a role in antitrust decisions.[ref 180] Most recently, after the FTC secured a permanent injunction to prohibit what it viewed as anticompetitive conduct from semiconductor company Qualcomm, the DOJ filed an appellate brief in support of Qualcomm and in opposition to the FTC, arguing that the injunction would “significantly impact U.S. national security” and incorporating a statement from a DOD official to the same effect.[ref 181] The Ninth Circuit sided with Qualcomm and the DOJ, citing national security concerns in an order granting a stay[ref 182] and later vacating the injunction.[ref 183]
Biological Weapons Anti-Terrorism Act; Chemical Weapons Convention Implementation Act
- Potentially applicable to: Licensing AI Creation & Proliferation
- Unlikely to be used for AI Oversight
Among the most pressing dangers posed by frontier AI models is the risk that sufficiently capable models will allow criminal or terrorist organizations or individuals to easily synthesize dangerous biological or chemical agents or to easily design and synthesize novel and catastrophically dangerous biological or chemical agents for use as weapons.[ref 184] The primary existing U.S. government authorities prohibiting the development and acquisition of biological and chemical weapons are the Biological Weapons Anti-Terrorism Act of 1989 (“BWATA”)[ref 185] and the Chemical Weapons Convention Implementation Act of 1998 (“CWCIA”),[ref 186] respectively.
The BWATA implements the Biological Weapons Convention (“BWC”), a multilateral international agreement that prohibits the development, production, acquisition, transfer, and stockpiling of biological weapons.[ref 187] The BWC requires, inter alia, that states parties implement “any necessary measures” to prevent the proliferation of biological weapons within their territorial jurisdictions.[ref 188] In order to accomplish this purpose, Section 175(a) of the BWATA prohibits “knowingly develop[ing], produc[ing], stockpil[ing], transfer[ing], acquir[ing], retain[ing], or possess[ing]” any “biological agent,” “toxin,” or “delivery system” for use as a weapon, “knowingly assist[ing] a foreign state or any organization” to do the same, or “attempt[ing], threaten[ing], or conspir[ing]” to do either of the above.[ref 189] Under § 177, the Government can file a civil suit to enjoin the conduct prohibited in § 175(a).[ref 190]
The CWCIA implements the international Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction.[ref 191] Under the CWCIA it is illegal for a person to “knowingly develop, produce, otherwise acquire, transfer directly or indirectly, receive, stockpile, retain, own, possess, or use, or threaten to use, any chemical weapon,” or to “assist or induce, in any way, any person to” do the same.[ref 192] Under § 229D, the Government can file a civil suit to enjoin the conduct prohibited in § 229 or “the preparation or solicitation to engage in conduct prohibited under § 229.”[ref 193]
It could be argued that publicly releasing an AI model that would be a useful tool for the development or production of biological or chemical weapons would amount to “knowingly assist[ing]” (or attempting or conspiring to knowingly assist) in the development of said weapons, under certain circumstances. Alternatively, with respect to chemical weapons, it could be argued that the creation or proliferation of such a model would amount to “preparation” to knowingly assist in the development of said weapons. If these arguments are accepted, then the U.S. government could, in theory, impose a de facto licensing regime on frontier AI creation and proliferation by suing to enjoin labs from releasing potentially dangerous frontier models publicly.
This, however, would be a novel use of the BWATA and/or the CWCIA. Cases interpreting § 175(a)[ref 194] and § 229[ref 195] have typically dealt with criminal prosecutions for the actual or supposed possession of controlled biological agents or chemical weapons or delivery systems. There is no precedent for a civil suit under §§ 177 or 229D to enjoin the creation or proliferation of a dual-use technology that could be used by a third party to assist in the creation of biological or chemical weapons. Furthermore, it is unclear whether courts would accept that the creation of such a dual-use model rises to the level of “knowingly” assisting in the development of chemical or biological weapons or preparing to knowingly assist in the development of chemical weapons.[ref 196]
A further obstacle to the effective use of the BWATA and/or CWCIA for oversight of AI creation or proliferation is the lack of any existing regulatory apparatus for oversight. BIS oversees a licensing regime implementing certain provisions of the Chemical Weapons Convention,[ref 197] but this regime restricts only the actual production or importation of restricted chemicals, and says nothing about the provision of tools that could be used by third parties to produce chemical weapons.[ref 198] To effectively implement a systematic licensing regime based on §§ 177 and/or 229D, rather than an ad hoc series of lawsuits attempting to restrict specific models on a case-by-case basis, new regulations would need to be promulgated.
Federal Select Agent Program
- Potentially applicable to: Tracking and/or Licensing AI Creation and Proliferation
- Unlikely to be used for AI Oversight
Following the anthrax letter attacks that killed 5 people and caused 17 others to fall ill in the fall of 2001, Congress passed the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (“BPRA”)[ref 199] in order “to improve the ability of the United States to prevent, prepare for, and respond to bioterrorism and other public health emergencies.”[ref 200] The BPRA authorizes HHS and the United States Department of Agriculture to regulate the possession, use, and transfer of certain dangerous biological agents and toxins; this program is known as the Federal Select Agent Program (“FSAP”).
The BPRA includes, at 42 U.S.C. § 262a, a section that authorizes “Enhanced control of dangerous biological agents and toxins” by HHS. Under § 262a(b), HHS is required to “provide for… the establishment and enforcement of safeguard and security measures to prevent access to [FSAP agents and toxins] for use in domestic or international terrorism or for any other criminal purpose.”[ref 201]
Subsection 262a(b) is subtitled “Regulation of transfers of listed agents and toxins,” and existing HHS regulations promulgated pursuant to § 262a(b) are limited to setting the processes for HHS authorization of transfers of restricted biological agents or toxins from one entity to another.[ref 202] However, it has been suggested that § 262a(b)’s broad language could be used to authorize a much broader range of prophylactic security measures to prevent criminals and/or terrorist organizations from obtaining controlled biological agents. A recent article in the Journal of Emerging Technologies argues that HHS has statutory authority under § 262a(b) to implement a genetic sequence screening requirement for commercial gene synthesis providers, requiring companies that synthesize DNA to check customer orders against a database of known dangerous pathogens to ensure that they are “not unwittingly participating in bioweapon development.”[ref 203]
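To illustrate the basic shape of the screening idea, the sketch below flags synthesis orders that contain any fragment on a curated watchlist. It is a deliberately simplified toy: the fragment names and sequences are placeholders of my own, and real screening frameworks rely on homology-based sequence comparison and expert review rather than exact string matching.

```python
# Toy sketch of gene-synthesis order screening. All sequences below are placeholders,
# not real pathogen sequences; real screening uses homology search, not exact matching.
FLAGGED_FRAGMENTS = {
    "hypothetical_agent_fragment_1": "ATGGCGTACCTTGAAACCGGT",
    "hypothetical_agent_fragment_2": "TTGACCGGAATCCGTACGATG",
}

def order_requires_review(order_sequence: str) -> bool:
    """Return True if the ordered sequence contains any watchlisted fragment verbatim."""
    return any(fragment in order_sequence for fragment in FLAGGED_FRAGMENTS.values())

if __name__ == "__main__":
    # An order containing a flagged fragment would be held for manual review.
    sample_order = "CCCATGGCGTACCTTGAAACCGGTAAA"
    print(order_requires_review(sample_order))  # True
```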
As discussed in the previous section, one of the primary risks posed by frontier AI models is that sufficiently capable models will facilitate the synthesis by criminal or terrorist organizations of dangerous biological agents, including those agents regulated under the FSAP. HHS’s Office of the Assistant Secretary for Preparedness and Response also seems to view itself as having authority under the FSAP to make regulations to protect against synthetic “novel high-risk pathogens.”[ref 204] If HHS decided to adopt an extremely broad interpretation of its authority under § 262a(b), therefore, it could in theory “establish[] and enforce[]… safeguard and security measures to prevent access” to agents and toxins regulated by the FSAP by creating a system for Oversight of frontier AI models. HHS is not well-positioned, either in terms of resources or technical expertise, to regulate frontier AI models generally, but might be capable of effectively overseeing a tracking or licensing regime for AI Creation and Proliferation that covered advanced models designed for drug discovery, gene editing, and similar tasks.[ref 205]
However, HHS appears to view its authority under § 262a far too narrowly to undertake any substantial AI Oversight responsibility under its FSAP authorities.[ref 206] Even if HHS did make the attempt, courts would likely view an attempt to institute a licensing regime solely on the basis of § 262a(b), without any further authorization from Congress, as ultra vires.[ref 207] In short, the Federal Select Agent Program in its current form is unlikely to be used for AI Oversight.
Chips for Peace: how the U.S. and its allies can lead on safe and beneficial AI
This piece was originally published in Lawfare.
The United States and its democratic allies can lead in AI and use this position to advance global security and prosperity.
On Dec. 8, 1953, President Eisenhower addressed the UN General Assembly. In his “Atoms for Peace” address, he set out the U.S. view on the risks and hopes for a nuclear future, leveraging the U.S.’s pioneering lead in that era’s most critical new technology in order to make commitments to promote its positive uses while mitigating its risks to global security. The speech laid the foundation for the international laws, norms, and institutions that have attempted to balance nuclear safety, nonproliferation of nuclear weapons, and peaceful uses of atomic energy ever since.
As a diverse class of largely civilian technologies, artificial intelligence (AI) is unlike nuclear technology in many ways. However, at the extremes, the stakes of AI policy this century might approach those of nuclear policy last century. Future AI systems may have the potential to unleash rapid economic growth and scientific advancement—or endanger all of humanity.
The U.S. and its democratic allies have secured a significant lead in AI supply chains, development, deployment, ethics, and safety. As a result, they have an opportunity to establish new rules, norms, and institutions that protect against extreme risks from AI while enabling widespread prosperity.
The United States and its allies can capitalize on that opportunity by establishing “Chips for Peace,” a framework with three interrelated commitments to address some of AI’s largest challenges.
First, states would commit to regulating their domestic frontier AI development and deployment to reduce risks to public safety and global security. Second, states would agree to share the benefits of safe frontier AI systems broadly, especially with states that would not benefit by default. Third, states would coordinate to ensure that nonmembers cannot undercut the other two commitments. This could be accomplished through, among other tools, export controls on AI hardware and cloud computing. The ability of the U.S. and its allies to exclude noncomplying states from access to the chips and data centers that enable the development of frontier AI models undergirds the whole agreement, similar to how regulation of highly enriched uranium undergirds international regulation of atomic energy. Collectively, these three commitments could form an attractive package: an equitable way for states to advance collective safety while reaping the benefits of AI-enabled growth.
Three grand challenges from AI
The Chips for Peace framework is a package of interrelated and mutually reinforcing policies aimed at addressing three grand challenges in AI policy.
The first challenge is catastrophe prevention. AI systems carry many risks, and Chips for Peace does not aim to address them all. Instead, Chips for Peace focuses on possible large-scale risks from future frontier AI systems: general-purpose AI systems at the forefront of capabilities. Such “catastrophic” risks are often split into misuse and accidents.
For misuse, the domain that has recently garnered the most attention is biosecurity: specifically, the possibility that future frontier AI systems could make it easier for malicious actors to engineer and weaponize pathogens, especially if coupled with biological design tools. Current generations of frontier AI models are not very useful for this. When red teamers at RAND attempted to use large language model (LLM) assistants to plan a more viable simulated bioweapon attack, they found that the LLMs provided answers that were inconsistent, inaccurate, or merely duplicative of what was readily discoverable on the open internet. It is reasonable to worry, though, that future frontier AI models might be more useful to attackers. In particular, lack of tacit knowledge may be an important barrier to successfully constructing and implementing planned attacks. Future AI models with greater accuracy, scientific knowledge, reasoning capabilities, and multimodality may be able to compensate for attackers’ lack of tacit knowledge by providing real-time tailored troubleshooting assistance to attackers, thus narrowing the gap between formulating a plausible high-level plan and “successfully” implementing it.
For accidental harms, the most severe risk might come from future increasingly agentic frontier AI systems: “AI systems that can pursue complex goals with limited direct supervision” through use of computers. Such a system could, for example, receive high-level goals from a human principal in natural language (e.g., “book an island getaway for me and my family next month”), formulate a plan about how to best achieve that goal (e.g., find availability on family calendars, identify possible destinations, secure necessary visas, book hotels and flights, arrange for pet care), and take or delegate actions necessary to execute on that plan (e.g., file visa applications, email dog sitters). If such agentic systems are invented and given more responsibility than managing vacations—such as managing complex business or governmental operations—it will be important to ensure that they are easily controllable. But our theoretical ability to reliably control these agentic AI systems is still very limited, and we have no strong guarantee that currently known methods will work for smarter-than-human AI agents, should they be invented. Loss of control over such agents might entail inability to prevent them from harming us.
Time will provide more evidence about whether and to what extent these are major risks. However, for now there is enough cause for concern to begin thinking about what policies could reduce the risk of such catastrophes, should further evidence confirm the plausibility of these harms and justify actual state intervention.
The second—no less important—challenge is ensuring that the post-AI economy enables shared prosperity. AI is likely to present acute challenges to this goal. In particular, AI has strong tendencies towards winner-take-all dynamics, meaning that, absent redistributive efforts, the first countries to develop AI may reap an outsized portion of its benefit and make catch-up growth more difficult. If AI labor can replace human labor, then many people may struggle to earn enough income, including the vast majority of people who do not own nearly enough financial assets to live off of. I personally think using the economic gains from AI to uplift the entire global economy is a moral imperative. But this would also serve U.S. national security. A credible, U.S.-endorsed vision for shared prosperity in the age of AI can form an attractive alternative to the global development initiatives led by China, whose current technological offerings are undermining the U.S.’s goals of promoting human rights and democracy, including in the Global South.
The third, meta-level challenge is coordination. A single state may be able to implement sensible regulatory and economic policies that address the first two challenges locally. But AI development and deployment are global activities. States are already looking to accelerate their domestic AI sectors as part of their grand strategy, and they may be tempted to loosen their laws to attract more capital and talent. They may also wish to develop their own state-controlled AI systems. But if the price of lax AI regulation is a global catastrophe, all states have an interest in avoiding a race to the bottom by setting and enforcing strong and uniform baseline rules.
The U.S.’s opportunity to lead
The U.S. is in a strong position to lead an effort to address these challenges, for two main reasons: U.S. leadership throughout much of the frontier AI life cycle and its system of alliances.
The leading frontier AI developers—OpenAI (where, for disclosure, I previously worked), Anthropic, Google DeepMind, and Meta—are all U.S. companies. The largest cloud providers that host the enormous (and rising) amounts of computing power needed to train a frontier AI model—Amazon, Microsoft, Google, and Meta—are also American. Nvidia chips are the gold standard for training and deploying large AI models. A large, dynamic, and diverse ecosystem of American AI safety, ethics, and policy nonprofits and academic institutions have contributed to our understanding of the technology, its impacts, and possible safety interventions. The U.S. government has invested substantially in AI readiness, including through the CHIPS Act, the executive order on AI, and the AI Bill of Rights.
Complementing this leadership is a system of alliances linking the United States with much of the world. American leadership in AI depends on the notoriously complicated and brittle semiconductor supply chain. Fortunately, however, key links in that supply chain are dominated by the U.S. or its democratic allies in Asia and Europe. Together, these countries contribute more than 90 percent of the total value of the supply chain. Taiwan is home to TSMC, which fabricates 90 percent of advanced AI chips. TSMC’s only major competitors are Samsung (South Korea) and Intel (U.S.). The Netherlands is home to ASML, the world’s only company capable of producing the extreme ultraviolet lithography tools needed to make advanced AI chips. Japan, South Korea, Germany, and the U.K. all hold key intellectual property or produce key inputs to AI chips, such as semiconductor manufacturing equipment or chip wafers. The U.K. has also catalyzed global discussion about the risks and opportunities from frontier AI, starting with its organization of the first AI Safety Summit last year and its trailblazing AI Safety Institute. South Korea recently hosted the second summit, and France will pick up that mantle later this year.
These are not just isolated strengths—they are leading to collective action. Many of these countries have been coordinating with the U.S. on export controls to retain control over advanced computing hardware. The work following the initial AI Safety Summit—including the Bletchley Declaration, International Scientific Report on the Safety of Advanced AI, and Seoul Declaration—also shows increased openness to multilateral cooperation on AI safety.
Collectively, the U.S. and its allies have a large amount of leverage over frontier AI development and deployment. They are already coordinating on export controls to maintain this leverage. The key question is how to use that leverage to address this century’s grand challenges.
Chips for Peace: three commitments for three grand challenges
Chips for Peace is a package of three commitments—safety regulation, benefit-sharing, and nonproliferation—which complement and strengthen each other. For example, benefit-sharing compensates states for the costs associated with safety regulation and nonproliferation, while nonproliferation prevents nonmembers from undermining the regulation and benefit-sharing commitments. While the U.S. and its democratic allies would form the backbone of Chips for Peace due to their leadership in AI hardware and software, membership should be open to most states that are willing to abide by the Chips for Peace package.
Safety regulation
As part of the Chips for Peace package, members would first commit to implementing domestic safety regulation. Member states would commit to ensuring that any frontier AI systems developed or deployed within their jurisdiction must meet consistent safety standards narrowly tailored to prevent global catastrophic risks from frontier AI. Monitoring of large-scale compute providers would enable enforcement of these standards.
Establishing a shared understanding of catastrophic risks from AI is the first step toward effective safety regulation. There is already exciting consensus formation happening here, such as through the International Scientific Report on the Safety of Advanced AI and the Seoul Declaration.
The exact content of safety standards for frontier AI is still an open question, not least because we currently do not know how to solve all AI safety problems. Current methods of “aligning” (i.e., controlling) AI behavior rely on our ability to assess whether that behavior is desirable. For behaviors that humans can easily assess, such as determining whether paragraph-length text outputs are objectionable, we can use techniques such as reinforcement learning from human feedback and Constitutional AI. These techniques already have limitations. These limitations may become more severe as AI systems’ behaviors become more complicated and therefore more difficult for humans to evaluate.
Despite our imperfect knowledge of how to align AI systems, there are some frontier AI safety recommendations that are beginning to garner consensus. One emerging suggestion is to start by evaluating such models for specific dangerous capabilities prior to their deployment. If a model lacks capabilities that meaningfully contribute to large-scale risks, then it should be outside the jurisdiction of Chips for Peace and left to individual member states’ domestic policy. If a model has dangerous capabilities sufficient to pose a meaningful risk to global security, then there should be clear rules about whether and how the model may be deployed. In many cases, basic technical safeguards and traditional law enforcement will bring risk down to a sufficient level, and the model can be deployed with those safeguards in place. Other cases may need to be treated more restrictively. Monitoring the companies using the largest amounts of cloud compute within member states’ jurisdictions should allow states to reliably identify possible frontier AI developers, while imposing few constraints on the vast majority of AI development.
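To make the tiered logic above concrete, here is a minimal sketch of how such a pre-deployment triage rule could be expressed. The tier names, the capability score, and both thresholds are hypothetical placeholders of my own, not terms drawn from any existing evaluation standard.

```python
from enum import Enum

class DeploymentTier(Enum):
    OUTSIDE_SCOPE = "no meaningful dangerous capabilities; left to domestic policy"
    SAFEGUARDED_DEPLOYMENT = "deployable with basic technical safeguards and ordinary law enforcement"
    RESTRICTED = "meaningful global-security risk; deployment restricted pending stricter rules"

def triage_model(dangerous_capability_score: float,
                 meaningful_risk_threshold: float = 0.2,
                 severe_risk_threshold: float = 0.8) -> DeploymentTier:
    """Hypothetical pre-deployment triage based on dangerous-capability evaluations.
    The score and both thresholds are illustrative placeholders, not real metrics."""
    if dangerous_capability_score < meaningful_risk_threshold:
        return DeploymentTier.OUTSIDE_SCOPE
    if dangerous_capability_score < severe_risk_threshold:
        return DeploymentTier.SAFEGUARDED_DEPLOYMENT
    return DeploymentTier.RESTRICTED
```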
Benefit-sharing
To legitimize and drive broad adoption of Chips for Peace as a whole—and compensate for the burdens associated with regulation—members would also commit to benefit-sharing. States that stand to benefit the most from frontier AI development and deployment by default would be obligated to contribute to programs that ensure benefits from frontier AI are broadly distributed, especially to member states in the Global South.
We are far from understanding what an attractive and just benefit-sharing regime would look like. “Benefit-sharing,” as I use the term, is supposed to encompass many possible methods. Some international regulatory regimes, like the International Atomic Energy Agency (IAEA), contain benefit-sharing programs that provide some useful precedent. However, some in the Global South understandably feel that such programs have fallen short of their lofty aspirations. Chips for Peace may also have to compete with more laissez-faire offers for technological aid from China. To make Chips for Peace an attractive agreement for states at all stages of development, states’ benefit-sharing commitments will have to be correspondingly ambitious. Accordingly, member states likely to be recipients of such benefit-sharing should be in the driver’s seat in articulating benefit-sharing commitments that they would find attractive and should be well represented from the beginning in shaping the overall Chips for Peace package. Each state’s needs are likely to be different, so there is not likely to be a one-size-fits-all benefit-sharing policy. Possible forms of benefit-sharing from which such states could choose include subsidized access to deployed frontier AI models, assistance tailoring models to local needs, dedicated inference capacity, domestic capacity-building, and cash.
A word of caution is warranted, however. Benefit-sharing commitments need to be generous enough to attract widespread agreement, justify the restrictive aspects of Chips for Peace, and advance shared prosperity. But poorly designed benefit-sharing could be destabilizing, such as if it enabled the recipient state to defect from the agreement but still walk away with shared assets (e.g., compute and model weights) and thus undermine the nonproliferation goals of the agreement. Benefit-sharing thus needs to be simultaneously empowering to recipient states and robust to their defection. Designing technical and political tools that accomplish both of these goals at once may therefore be crucial to the viability of Chips for Peace.
Nonproliferation
A commitment to nonproliferation of harmful or high-risk capabilities would make the agreement more stable. Member states would coordinate on policies to prevent non-member states from developing or possessing high-risk frontier AI systems and thereby undermining Chips for Peace.
Several tools will advance nonproliferation. The first is imposing cybersecurity requirements that prevent exfiltration of frontier AI model weights. Second, more speculatively, on-chip hardware mechanisms could prevent exported AI hardware from being used for certain risky purposes.
The third possible tool is export controls. The nonproliferation aspect of Chips for Peace could be a natural broadening and deepening of the U.S.’s ongoing efforts to coordinate export controls on AI chips and their inputs. These efforts rely on the cooperation of allies. Over time, as this system of cooperation becomes more critical, these states may want to formalize their coordination, especially by establishing procedures that check the unilateral impulses of more powerful member states. In this way, Chips for Peace could initially look much like a new multilateral export control regime: a 21st-century version of COCOM, the Cold War-era Coordinating Committee for Multilateral Export Controls (the predecessor of the current Wassenaar Arrangement). Current export control coordination efforts could also expand beyond chips and semiconductor manufacturing equipment to include large amounts of cloud computing capacity and the weights of models known to present a large risk. Nonproliferation should also include imposition of security standards on parties possessing frontier AI models. The overall goal would be to reduce the chance that nonmembers can indigenously develop, otherwise acquire (e.g., through espionage or sale), or access high-risk models, except under conditions multilaterally set by Chips for Peace states-parties.
As the name implies, this package of commitments draws loose inspiration from the Treaty on the Non-Proliferation of Nuclear Weapons and the IAEA. Comparisons to these precedents could also help Chips for Peace avoid some of the missteps of past efforts.
Administering Chips for Peace
How would Chips for Peace be administered? Perhaps one day we will know how to design an international regulatory body that is sufficiently accountable, legitimate, and trustworthy for states to be willing to rely on it to directly regulate their domestic AI industries. But this currently seems out of reach. Even if states perceive international policymaking in this domain as essential, they are understandably likely to be quite jealous of their sovereignty over their domestic AI industries.
A more realistic approach might be harmonization backed by multiple means of verifying compliance. States would come together to negotiate standards that are promulgated by the central intergovernmental organization, similar to the IAEA Safety Standards or Financial Action Task Force (FATF) Recommendations. Member states would then be responsible for substantial implementation of these standards in their own domestic regulatory frameworks.
Chips for Peace could then rely on a number of tools to detect and remedy member state noncompliance with these standards and thus achieve harmonization despite the international standards not being directly binding on states. The first would be inspections or evaluations performed by experts at the intergovernmental organization itself, as in the IAEA. The second is peer evaluations, where member states assess each other’s compliance. This is used in both the IAEA and the FATF. Finally, and often implicitly, the most influential member states, such as the U.S., use a variety of tools—including intelligence, law enforcement (including extraterritorially), and diplomatic efforts—to detect and remedy policy lapses.
The hope is that these three approaches combined may be adequate to bring compliance to a viable level. Noncompliant states would risk being expelled from Chips for Peace and thus cut off from frontier AI hardware and software.
Open questions and challenges
Chips for Peace has enormous potential, but an important part of ensuring its success is acknowledging the open questions and challenges that remain. First, the analogy between AI chips and highly enriched uranium (HEU) is imperfect. Most glaringly, AI models (and therefore AI chips) have a much wider range of beneficial and benign applications than HEU. Second, we should be skeptical that implementing Chips for Peace will be a simple matter of copying the nuclear arms control apparatus to AI. While we can probably learn a lot from nuclear arms control, nuclear inspection protocols took decades to evolve, and the different technological features of large-scale AI computing will necessitate new methods of monitoring, verifying, and enforcing agreements.
Which brings us to the challenge of monitoring, verification, and enforcement (MVE) more generally. We do not know whether and how MVE can be implemented at acceptable costs to member states and their citizens. There are nascent proposals for how hardware-based methods could enable highly reliable and (somewhat) secrecy-preserving verification of claims about how AI chips have been used, and prevent such chips from being used outside an approved setting. But we do not yet know how robust these mechanisms can be made, especially in the face of well-resourced adversaries.
Chips for Peace probably works best if most frontier AI development is done by private actors, and member states can be largely trusted to regulate their domestic sectors rigorously and in good faith. But these assumptions may not hold. In particular, perceived national security imperatives may drive states to become more involved in frontier AI development, such as through contracting for, modifying, or directly developing frontier AI systems. Asking states to regulate their own governmental development of frontier AI systems may be harder than asking them to regulate their private sectors. Even if states are not directly developing frontier AI systems, they may also be tempted to be lenient toward their national champions to advance their security goals.
Funding has also been a persistent issue in multilateral arms control regimes. Chips for Peace would likely need a sizable budget to function properly, but there is no guarantee that states will be more financially generous in the future. Work toward designing credible and sustainable funding mechanisms for Chips for Peace could be valuable.
Finally, although I have noted that the U.S.’s democratic allies in Asia and Europe would form the core of Chips for Peace due to their collective ability to exclude parties from the AI hardware supply chain, I have left open the question of whether membership should be open only to democracies. Promoting peaceful and democratic uses of AI should be a core goal of the U.S. But the challenges from AI can and likely will transcend political systems. China has shown some initial openness to preventing competition in AI from causing global catastrophe. It is also trying to establish an independent semiconductor ecosystem despite export controls on chips and semiconductor manufacturing equipment. If these efforts are successful, Chips for Peace would be seriously weakened unless China were admitted. As during the Cold War, we may one day have to create agreements and institutions that cross ideological divides in the shared interest of averting global catastrophe.
While the risk of nuclear catastrophe still haunts us, we are all much safer due to the steps the U.S. took last century to manage this risk.
AI may bring risks of a similar magnitude this century. The U.S. may once again be in a position to lead a broad, multilateral coalition to manage these enormous risks. If so, a Chips for Peace model could help do so while advancing broad prosperity.
International law and advanced AI: exploring the levers for ‘hard’ control
The question of how artificial intelligence (AI) is to be governed has risen rapidly up the global agenda – and in July 2023, United Nations Secretary-General António Guterres raised the possibility of the “creation of a new global body to mitigate the peace and security risks of AI.” While the past year has seen the emergence of multiple initiatives for AI’s international governance – by states, international organizations and within the UN system – most of these remain in the realm of non-binding ‘soft law.’ However, many influential voices in the debate increasingly argue that the challenges posed by future AI systems mean that international AI governance will eventually need to include legally binding elements.
If and when states choose to take up this challenge and institute binding international rules on advanced AI – either under a comprehensive global agreement, or between a small group of allied states – there are three principal areas where such controls might usefully bite. First, states might agree to controls on particular end uses of AI that are considered most risky or harmful, drawing on the European Union’s new AI Act as a general model. Second, controls might be introduced on the technology itself, structured around the development of certain types of AI systems, irrespective of use – taking inspiration from arms control regimes and other international attempts to control or set rules around certain forms of scientific research. Third, states might seek to control the production and dissemination of the industrial inputs that power AI systems – principally the computing power that drives AI development – harmonizing export controls and other tools of economic statecraft.
Ahead of the upcoming United Nations Summit of the Future and the French-hosted international AI summit in 2025, this post explores these three possible control points and the relative benefits of each in addressing the challenges posed by advanced AI. It also addresses the structural questions and challenges that any binding regime would need to address – including its breadth in terms of state participation, how participation might be incentivized, the role that private sector AI labs might play, and the means by which equitable distribution of AI’s benefits could be enabled. This post is informed by ongoing research projects into the future of AI international governance undertaken by the Institute for Law & AI, Lawfare’s Legal AI Safety Initiative, and others.
Hard law approaches to AI governance
The capabilities of AI systems have advanced rapidly over the past decade. While these systems present significant opportunities for societal benefit, they also engender new risks and challenges. Possible risks from the next wave of general-purpose foundation models, deemed “frontier” or “advanced AI,” include increases in inequality, misuse by harmful actors, and dangerous malfunctions. Moreover, AI agents that are able to make and execute long-term plans may soon proliferate, and would pose particular challenges.
As a result of these developments, states are beginning to take concrete steps to regulate AI at the domestic level. This includes the United States’ Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, the European Union’s AI Act, the UK’s AI White Paper and subsequent public consultation, and Chinese laws covering both the development and use of various AI systems. At the same time, given the rapid pace of change and cross-border nature of AI development and potential harms, it is increasingly recognized that domestic regulation alone will likely not be adequate to address the full spread of challenges that advanced AI systems pose.
As a result, recent years have also witnessed the emergence of a growing number of initiatives for international coordination of AI policy. In the twenty months since the launch of OpenAI’s ChatGPT propelled AI to the top of the policy agenda, we have seen two international summits on AI safety; the Council of Europe conclude its Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; the G7 launch its Hiroshima Process on responsible AI governance; and the UN launch an Advisory Body on international AI governance.
These ongoing initiatives are unlikely to represent the limits of states’ ambitions for AI coordination on the international plane. Indeed, should the pace of AI capability development continue as it has over the last decade, it seems likely that in the coming years states may choose to pursue some form of binding ‘hard law’ international governance for AI – moving beyond the mostly soft law commitments that have characterized today’s diplomatic efforts. Geopolitical developments, a rapid jump in AI capabilities, or a significant AI security incident or crisis might also lead states to support a hard law approach. Throughout 2023, several influential participants in the debate – most notably the AI lab OpenAI – began to raise the possibility that binding international governance may become necessary once AI systems reach a certain capability level. A number of political and moral authorities have gone further and called for the immediate institution of binding international controls on AI – including The Elders, an influential group of former politicians, who have called for an “international treaty establishing a new international AI safety agency,” and Pope Francis, who has urged the global community to adopt a “binding international treaty that regulates the development and use of artificial intelligence in its many forms.”
To date these calls for binding international governance have been made only at a high level of abstraction, without detailed proposals for how a binding international AI governance regime might be structured or what activities should be controlled. Moreover, the advanced state of the different soft law approaches currently in progress means that the design and legal form of any hard law regime that is eventually instituted would be heavily conditioned by other AI governance initiatives or institutions that precede it. Nevertheless, given the significant possibility of states beginning discussion of binding AI governance in the coming years, there is value in surveying the areas where controls could be implemented, assessing the contribution these controls might make in addressing the challenges of AI, and identifying the relevant institutional antecedents.
Three control points
There are three main areas where binding international controls on AI might bite: on particular ‘downstream’ uses of AI, on the upstream ‘development’ of AI systems, and on the industrial inputs that underpin the development of AI systems.
Downstream uses of AI
If the primary motivation behind states introducing international controls is a desire to mitigate the perceived risks from advanced AI, then the most natural approach would be to structure those controls around the particular AI uses that are considered to pose the greatest level of risk. The most prominent domestic AI regulation – the European Union’s AI Act – follows this approach, introducing different tiers of control for uses of AI systems based around the perceived risk of those use cases. Those that are deemed most harmful – for example the use of AI for social-scoring or in biometric systems put in place to predict criminality – are prohibited outright.
This form of control could be replicated at an international level. Existing international law imposes significant constraints on certain uses of AI – such as the protections provided by international human rights law and international humanitarian law. However, explicitly identifying and controlling particular harmful AI uses would add an additional layer of granularity to these constraints. Should states wish to do so, arms control agreements offer one model for how this could be done.
The principal benefit of a use-based approach to international control of AI is its simplicity: where particular AI uses are most harmful, they can be controlled or prohibited. States should in theory also be able to update any new treaty regime, adding additional harmful uses of AI to a controlled list should they wish to do so – and if they are able to agree on these. Nevertheless, structuring international controls solely around identified harmful uses of AI also has certain limitations. Most importantly, while such a use-based governance regime would have a significant impact in addressing the risks posed by the deliberate misuse of AI, its impact in reducing other forms of AI risk is less clear.
As reported by the 2024 International Scientific Report on the Safety of Advanced AI, advanced AI systems may also pose risks stemming from the potential malfunction of those systems – regardless of their particular application or form of use. The “hallucinations” generated by the most advanced chatbots, in spite of their developers’ best intentions, are an early example of this. At the extreme, certain researchers have posited that developers might lose the ability to control the most advanced systems. The malfunction or loss of control of more advanced systems could have severe implications as these systems are increasingly incorporated into critical infrastructure, such as energy, financial, or cybersecurity networks. For example, a malfunction of an AI system incorporated into military systems, such as nuclear command, control and communication infrastructure, might lead to catastrophic consequences. Use-based governance may be able to address this issue in part, by regulating the extent to which AI technology is permitted to be integrated into critical infrastructure at all – but such a form of control would not address the possibility of unexpected malfunction or loss of control of an AI system used in a permitted application.
Upstream development of AI
Given the possibility of dangerous malfunctions in advanced AI systems, a complementary approach would be to focus on the technology itself. Such an approach would entail structuring an international regime around controls on the upstream development of AI systems, rather than particularly harmful applications or uses.
International controls on upstream AI development could be structured in a number of ways. Controls could focus on security measures: for example, mandatory information security or other protective requirements to ensure that key components of advanced AI systems, such as model weights, are not leaked or stolen by harmful actors or geopolitical rivals. The regime might also require that AI systems be tested against agreed safety metrics prior to release, with systems that fail prohibited from release until they can be demonstrated to be safe. Alternatively, international rules might focus on whether each state’s jurisdiction complies with agreed safety and oversight standards, rather than on the safety of individual AI systems or training runs.
Controls could focus on increasing transparency or other confidence-building measures. States could introduce a mandatory warning system should AI models reach certain capability thresholds, or should there be an AI security incident. A regime might also include a requirement to notify other state parties – or the treaty body, if one was created – before beginning training of an advanced AI system, allowing states to convene and discuss precautionary measures or mitigations. Alternatively, the regime could require that other state parties or the treaty body give approval before advanced systems are trained.

If robustly enforced, structuring controls around AI development would contribute significantly towards addressing the security risks posed by advanced AI systems. However, this approach to international governance also has its challenges. In particular, given that smaller AI systems are unlikely to pose significant risks, participants in any regime would likely need to also agree on thresholds for the introduction of controls – with these only applying to AI systems of a certain size or anticipated capability level. Provision may be needed to periodically update this threshold, in line with technological advances. In addition, given the benefits that advanced AI is expected to bring, an international regime controlling AI development would need to also include provision for the continued safe development of advanced AI systems above any capability threshold.
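As a purely illustrative sketch of the kind of threshold-triggered notification rule described above, the short Python fragment below shows how a planned training run crossing an agreed compute threshold might be flagged and a notice drafted for other state parties or a treaty body. The 1e26 FLOP figure and all field names are hypothetical assumptions, not values drawn from any actual agreement.

```python
# Illustrative sketch only: a threshold-triggered notification rule of the kind
# described above. The 1e26 FLOP threshold and all field names are hypothetical.
from dataclasses import dataclass

NOTIFICATION_THRESHOLD_FLOP = 1e26   # assumed treaty-agreed threshold, periodically updated

@dataclass
class PlannedTrainingRun:
    developer: str
    estimated_training_flop: float
    start_date: str

def requires_notification(run: PlannedTrainingRun) -> bool:
    """Controls apply only above the agreed size/capability proxy threshold."""
    return run.estimated_training_flop >= NOTIFICATION_THRESHOLD_FLOP

def draft_notification(run: PlannedTrainingRun) -> str:
    """A minimal notice a state party might send to other parties or a treaty body."""
    return (f"NOTICE: {run.developer} plans a training run of ~"
            f"{run.estimated_training_flop:.1e} FLOP starting {run.start_date}.")

if __name__ == "__main__":
    run = PlannedTrainingRun("ExampleLab", 3e26, "2025-01-01")
    if requires_notification(run):
        print(draft_notification(run))
```

The need for a periodically updated threshold, noted above, would in practice mean revisiting the single constant in this sketch as capabilities and training efficiency advance.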
Industrial inputs: AI compute
Finally, a third approach to international governance would be for states to move another step back and focus on the AI supply chain. Supply-side controls of basic inputs have been successful in the past in addressing the challenges posed by advanced technology. An equivalent approach would involve structuring international controls around the industrial inputs necessary for the development of advanced AI systems, with a view to shaping the development of those systems.
The three principal inputs used to train AI systems are computing power, data and algorithms. Of these, computing power (“compute”) is the most viable node for control by states, and hence the focus of this section. This is because AI models are trained on physical semiconductor chips, which are by their nature quantifiable (they can be counted), detectable (they can be identified and physically tracked), and excludable (they can be restricted). The supply chain for AI chips is also exceptionally concentrated. These properties mean that controlling the distribution of AI compute would likely be technologically feasible – should states be able to agree on how to do so.
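To illustrate why those three properties matter in practice, here is a minimal, hypothetical sketch of a chip registry: because accelerators can be counted and identified, a ledger can track where they are and refuse transfers to unapproved destinations. All entities and quantities are invented for illustration and do not correspond to any existing registry or control list.

```python
# Illustrative sketch only: a toy registry showing why compute is quantifiable,
# detectable, and excludable. All entities and quantities are invented.
from collections import defaultdict

APPROVED_JURISDICTIONS = {"Member State A", "Member State B"}

class ChipRegistry:
    def __init__(self):
        self.holdings = defaultdict(int)   # jurisdiction -> number of accelerators

    def register_shipment(self, destination: str, quantity: int) -> bool:
        """Excludable: transfers to non-approved destinations are refused."""
        if destination not in APPROVED_JURISDICTIONS:
            return False
        self.holdings[destination] += quantity   # quantifiable: chips can be counted
        return True

    def report(self) -> dict:
        """Detectable: holdings can be audited and reconciled against shipments."""
        return dict(self.holdings)

if __name__ == "__main__":
    registry = ChipRegistry()
    print(registry.register_shipment("Member State A", 10_000))    # True
    print(registry.register_shipment("Nonmember State C", 5_000))  # False: excluded
    print(registry.report())
```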
International agreements on the flow and usage of AI chips could assist in reducing the risks from advanced AI in a number of different ways. Binding rules around the flow of AI chips could be used to augment or enforce a wider international regime covering AI uses or development – for example by denying these chips to states who violate the regime or to non-participating states. Alternatively, international controls around AI industrial inputs might be used to directly shape the trajectory of AI development, through directing the flow of chips towards certain actors, potentially mitigating the need to control downstream uses or upstream development of AI systems at all. Future technological advances may also make it possible to monitor the use of individual semiconductor chips – which would be useful in verifying compliance with any binding international rules around the development of AI systems.
Export control law can provide the conceptual basis for international control of AI’s industrial inputs. The United States has already introduced a sweeping set of domestic laws controlling the export of semiconductors, with a view to restricting China’s ability to acquire the chips needed to develop advanced AI and to maintaining the U.S. technological advantage in this space. These U.S. controls could be used as the basis for an expanded international semiconductor export control regime, between the U.S. and its allies. Existing or historic multilateral export control regimes could also serve as a model for a future international agreement on AI compute exports. This includes the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), under which Western states coordinated an arms embargo on Eastern Bloc countries, and its successor Wassenaar Arrangement, through which Western states harmonize controls on exports of conventional arms and dual-use items.
In order to be effective, controls on the export of physical AI chips would likely need to be augmented by restrictions on the proliferation of both AI systems themselves and of the technology necessary for the development of semiconductor manufacturing capability outside of participating states. Precedent for such a provision can be found in a number of international arms control agreements. For example, Article 1 of the Nuclear Non-Proliferation Treaty prohibits designated nuclear weapon states from transferring nuclear weapons or control over such weapons to any recipient, and from assisting, encouraging or inducing non-nuclear weapon states to manufacture or acquire the technology to do so. A similar provision controlling the exports of semiconductor design and manufacturing technology – perhaps again based on existing U.S. export controls – could be included in an international AI regime.
Structural challenges
A binding regime for governing advanced AI agreed by states – incorporating any of the above controls – would face a number of structural challenges.
Private sector actors
The first of these stems from the nature of the current wave of AI development. Unlike many of the twentieth century’s most significant AI advances, which were developed by governments or academia, the most powerful AI models today are almost exclusively designed in corporate labs, trained using private sector-produced chips, and run on commercial cloud data centers. While certain AI companies have experimented with corporate structures such as a long-term benefit trust or capped profit provision, commercial concerns are the major driver behind most of today’s AI advances – a situation that is likely to continue in the near future, barring significant government investment in AI capabilities.
As a result, a binding international regime aiming to control AI use or development would require a means of legally ensuring the compliance of private sector AI labs. This could be achieved through the imposition of obligations on participating state parties to implement the regime through domestic law. Alternatively, the treaty instituting the regime could impose direct obligations on corporations – a less common approach in international law. However, even in such a situation the primary responsibility for enforcing the regime and remedying breaches would likely still fall on states.
Breadth of state participation
A further issue relates to the breadth of state participation in any binding international regime: should this be targeted or comprehensive? At present, the frontier of the AI industry is concentrated in a small number of countries. A minilateral agreement concluded between a limited group of states (such as between the U.S. and its allies) would almost certainly be easier to reach consensus on than a comprehensive global agreement. Given the pace of AI development, and concerns regarding the capabilities of the forthcoming generation of advanced models, there is significant reason to favor the establishment of a minimally viable international agreement, concluded as quickly as possible.
Nevertheless, a major drawback of a minilateral agreement conducted between a small group of states – in contrast to a comprehensive global agreement – would be the issue of legitimacy. Although AI development is currently concentrated in a small number of states, any harms that result from the misuse or malfunction of AI systems are unlikely to remain confined within the borders of those states. In addition, citizens of the Global South may be least likely to realize the economic benefits that result from AI technological advances. As such, there is a strong normative argument for giving a voice to a broad group of states in the design of any international regime intended to govern its development – not simply those that are currently most advanced in terms of AI capabilities. In the absence of this, any regime would likely suffer from a critical absence of global legitimacy, potentially threatening both its longevity and the likelihood of other states later agreeing to join.
A minilateral agreement aiming to institute binding international rules to govern AI would therefore need to include a number of provisions to address these legitimacy issues. First, while it may prove more practicable to establish governance initially amongst a small group of states, it would greatly aid legitimacy if participants were to commit explicitly to working towards a global regime, and to open the regime, at least in principle, to any state willing to accept the controls and any enforcement mechanisms. Precedent for such a provision can be found in other international agreements – for example, the 1990 Chemical Weapons Accord between the U.S. and the USSR, which included a pledge to work towards a global prohibition on chemical weapons and eventually led to the 1993 Chemical Weapons Convention, which is open to all states to join.
Incentives and distribution
This brings us to incentives. In order to encourage broad participation in the regime, states with less developed artificial intelligence sectors may need to be offered inducements to join – particularly given that doing so might curtail their freedom to develop their own domestic AI capabilities. One such inducement would be a commitment from leading AI states to distribute the benefits of AI advances to less developed states, conditional on those states committing not to violate the restrictive provisions of the agreement – a so-called ‘dual mandate.’
Inspiration for such an approach could be drawn from the Nuclear Non-Proliferation Treaty, under which non-nuclear weapon participants agree to forgo the right to develop nuclear weapons in exchange for the sharing of “equipment, materials and scientific and technological information for the peaceful uses of nuclear energy.” An equivalent provision under an AI governance regime might for example grant participating states the right to access the most advanced systems, for public sector or economic development purposes, and promise assistance in incorporating these systems into beneficial use cases.
The international governance of AI remains a nascent project. Whether binding international controls of any form come to be implemented in the near future will depend upon a range of variables and political conditions. These include the direction of AI technological developments and the evolution of relations between leading AI states. As such, the feasibility of a binding international governance regime for AI remains to be seen. In light of 2024’s geopolitical tensions, and the traditional reluctance of the U.S. and China to accept international law restrictions that infringe on sovereignty or national security, binding international AI governance appears unlikely to be established immediately.
However, this position could rapidly change. Technological or geopolitical developments – such as a rapid and unexpected jump in AI capabilities, a shift in global politics, or an AI-related security incident or crisis with global impact – could act as forcing mechanisms leading states to come to support the introduction of international controls. In such a scenario, states will likely wish to implement these quickly, and will require guidance on both the form these controls should take and how they might be enacted.
Historical analogy suggests that international negotiations of a magnitude equivalent to the challenges AI will pose typically take many years to conclude. It took over ten years from the initial UN discussions around international supervision of nuclear material for the statute of the International Atomic Energy Agency to be negotiated. In the case of AI, states will likely not have this long. Given the stakes at hand, lawyers and policymakers should therefore begin considering, as a matter of urgency, both the form that future international AI governance should take and how it might be implemented.