Commerce just proposed the most significant federal AI regulation to date – and no one noticed
A little more than a month ago, the Bureau of Industry and Security (“BIS”) proposed a rule that, if implemented, might just be the most significant U.S. AI regulation to date. To no one’s surprise, the proposed rule has received relatively scant media attention compared to more ambitious AI governance measures like California’s SB 1047 or the EU AI Act—regulation is rarely a sexy topic, and the proposed rule is a dry, common-sense, procedural measure that doesn’t require labs to do much of anything besides send an e-mail or two to a government agency every few months. But the proposed rule would allow BIS to collect a great deal of important information about the most advanced AI models, and it’s a familiar fact of modern life that complex systems like companies, governments, and large language models thrive on a diet of information.
This being the case, anyone who’s interested in what the U.S. government’s approach to frontier AI regulation is likely to look like would probably be well-served by a bit of context about the rule and its significance. If that’s you, then read on.
What does the proposed rule do?
Essentially, the proposed rule would allow BIS to collect information on a regular basis about the most advanced AI models, which the proposed rule calls “dual-use foundation models.”[ref 1] The rule provides that any U.S. company that plans to conduct a sufficiently large AI model training run[ref 2] within the next six months must report that fact to BIS on a quarterly basis (i.e., once every three months, by specified reporting deadlines). Companies that plan to build or acquire sufficiently large computing clusters for AI training are similarly required to notify BIS.
Once a company has notified BIS of qualifying plans or activities, the proposed rule states that BIS will send the company a set of questions, which must be answered within 30 days. BIS can also send companies additional “clarification questions” after receiving the initial answers, and these clarification questions must be answered within 7 days.
The proposed rule includes a few broad categories of information that BIS will certainly collect. For instance, BIS is required under the rule to ask companies to report the results of any red-teaming safety exercises conducted and the physical and cybersecurity measures taken to protect model weights. Importantly, however, the proposed rule would not limit BIS to asking these questions—instead, it provides that BIS questions “may not be limited to” the listed topics. In other words, the proposed rule would provide BIS with extremely broad and flexible information-gathering capabilities.
Why does the proposed rule matter?
The notice of proposed rulemaking (“NPRM”) doesn’t come as a surprise—observers have been expecting something like it for a while, because President Biden ordered the Department of Commerce to implement reporting requirements for “dual-use foundation models” in § 4.2(a) of Executive Order 14110. BIS also sent a one-off survey to selected AI companies in January 2024, collecting information similar to what the new rule would collect on a recurring basis.
But while the new proposed rule isn’t unexpected, it is significant. AI governance researchers have emphasized the importance of reporting requirements, writing of a “growing consensus among experts in AI safety and governance that reporting safety information to trusted actors in government and industry is key” for responding to “emerging risks presented by frontier AI systems.” And most of the more ambitious regulatory frameworks for frontier AI systems that have been proposed or theorized would require the government to collect and process safety-relevant information. Doing this effectively—figuring out what information needs to be collected and what the collected information means—will require institutional knowledge and experience, and collecting safety information under the proposed rule will allow BIS to cultivate that knowledge and experience internally. In short, the proposed rule is an important first step in the regulation of frontier models.
Labs already voluntarily share some safety information with the government, but these voluntary commitments have been criticized as “vague, sensible-sounding pledge[s] with lots of wiggle room,” and they are not enforceable. In practice, voluntary commitments obligate companies to share only whatever information they want to share, whenever they want to share it. The proposed rule, on the other hand, would be legally enforceable, with potential civil and criminal penalties for noncompliance, and would allow BIS to choose what information to collect.
Pushback and controversy
Like other recent attempts to regulate frontier AI developers, the proposed rule has attracted some amount of controversy. However, the recently published public comments on the rule seem to indicate that the rule is unlikely to be challenged in court—and that, unless the next presidential administration decides to change course and scrap the proposed rule, reporting requirements for dual-use foundation models are here to stay.
The proposed rule and the Defense Production Act
As an executive-branch agency, BIS typically only has the legal authority to issue regulations if some law passed by Congress authorizes the kind of regulation contemplated. According to BIS, congressional authority for the proposed rule comes from § 705 of the Defense Production Act (“DPA”).
The DPA is a law that authorizes the President to take a broad range of actions in service of “the national defense.” The DPA was initially enacted during the Korean War and used solely for purposes related to defense industry production. Since then, Congress has renewed the DPA a number of times and has significantly expanded the statute’s definition of “national defense” to include topics such as “critical infrastructure protection and restoration,” “homeland security,” “energy production,” and “space.”
Section 705 of the DPA authorizes the President to issue regulations and conduct industry surveys to “obtain such information… as may be necessary or appropriate, in his discretion, to the enforcement or administration of [the DPA].” While § 705 is very broadly worded, and on its face appears to give the President a great deal of discretionary authority to collect all kinds of information, it has historically been used primarily to authorize one-off “industrial base assessment” surveys of defense-relevant industries. These assessments have typically been time-bounded efforts to analyze the state of a specified industry, resulting in lengthy “assessment” documents. Interestingly enough, BIS has actually conducted an assessment of the artificial intelligence industry once before—in 1994.[ref 3]
Unlike past industrial base assessments, the proposed rule would allow the federal government to collect information from industry actors on an ongoing basis, indefinitely. This means that the kind of information BIS requests and the purposes it uses that information for may change over time in response to advances in AI capabilities and in efforts to understand and evaluate AI systems. And unlike past assessment surveys, the rule’s purpose is not simply to aid in the preparation of a single snapshot assessment of the industry. Instead, BIS intends to use the information it collects to “ensure that the U.S. Government has the most accurate, up-to-date information when making policy decisions” about AI and the national defense.
Legal and policy objections to reporting requirements under Executive Order 14110
After Executive Order 14110 was issued in October 2023, one of the most common criticisms of the more-than-100-page order was that its reliance on the DPA to justify reporting requirements was unlawful. This criticism was repeated by a number of prominent Republican elected officials in the months following the order’s publication, and the prospect of a lawsuit challenging the legality of reporting requirements under the executive order was widely discussed. But while these criticisms were based on legitimate and understandable concerns about the separation of powers and the scope of executive-branch authority, they were not legally sound. Ultimately, any lawsuit challenging the proposed rule would likely need to be filed by one of the leading AI labs subject to the rule’s requirements, and none of those labs seems inclined to raise the kind of fundamental objections to the rule’s legality that early reactions to the executive order contemplated.
The basic idea behind the criticisms of the executive order was that it used the DPA in a novel way, to do something not obviously related to the industrial production of military materiel. To some skeptics of the Biden administration, or observers generally concerned about the concentration of political power in the executive branch, the executive order looked like an attempt to use emergency wartime powers in peacetime to increase the government’s control over private industry. The public comment[ref 4] on BIS’s proposed rule by the Americans for Prosperity Foundation (“AFP”), a libertarian advocacy group, is a representative articulation of this perspective. AFP argues that the DPA is an “emergency” statute that should not be used in non-emergencies for purposes not directly related to defense industry production.
This kind of concern about peacetime abuses of DPA authority is not new. President George H.W. Bush, after signing a bill reauthorizing the DPA in 1992, remarked that using § 705 during peacetime to collect industrial base data from American companies would “intrude inappropriately into the lives of Americans who own and work in the Nation’s businesses.” And former federal judge Jamie Baker, in an excellent 2021 paper on the DPA’s potential as an AI governance tool, predicted that using § 705 to collect information from “private companies engaged in AI research” would meet with “challenge and controversy.”
Still, to quote from Judge Baker’s piece again, “Section 705 is clearly written and the authority it presents is strong.” Nothing in the DPA indicates that industrial base surveys under § 705 cannot be continuously ongoing, or that the DPA generally can be used only to encourage increased defense industry production. It’s true that § 705 and the related regulations both focus on gathering information about the capacity of the U.S. industrial base to support “the national defense”—but recall that the DPA defines “national defense” very broadly, to include a wide variety of non-military considerations such as critical infrastructure protection. Moreover, Presidents of both parties have for decades used the DPA for purposes not directly related to defense industry production. For example, DPA authorities have been used to supply California with natural gas during the 2000-2001 energy crisis and to block corporate acquisitions that would have given Chinese companies ownership interests in U.S. semiconductor companies. In short, while critics of the proposed rule can reasonably argue that using the DPA in novel ways to collect information from private AI companies is bad policy, or politically undesirable, it is much harder to mount a credible argument that the rule is unlawful.
Moreover, government access to up-to-date information about frontier models may be more important to national security, and even to military preparedness specifically, than the rule’s critics anticipate. A significant portion of the NPRM is devoted to justifying the rule’s importance to “the national defense” and “the defense industrial base.” According to BIS, integrating dual-use foundation models into “military equipment, signal intelligence devices, and cybersecurity software” could soon become important to the national defense. Therefore, BIS claims, the government needs access to information from developers both to determine whether government action to stimulate further dual-use foundation model development is needed and “to ensure that dual-use foundation models operate in a safe and reliable manner.”
In any event, any lawsuit challenging the proposed rule would probably have to be brought by one of the labs subject to the reporting requirements.[ref 5] A few leading AI labs have submitted public comments on the rule, but none expressed any objection to the basic concept of an ongoing system of mandatory reporting requirements for dual-use foundation model developers. Anthropic’s comment requests only that the reporting requirements be semiannual rather than quarterly, that labs have more time to respond to questions, and that BIS tweak some of the definitions in the proposed rule and take steps to ensure that the sensitive information contained in labs’ responses is handled securely. OpenAI’s comment goes a bit further, asking (among other things) that BIS limit itself to collecting only “standardized” information relevant to national security concerns and to using the information collected “for the sole purpose to ensure [sic] and verify the continuous availability of safe, reliable, and effective AI.” But neither those labs nor any of their competitors has voiced a fundamental objection to the basic idea of mandatory reporting requirements that allow the government to collect safety information about dual-use foundation models. This is unsurprising, given that these and other leading AI companies have already committed to voluntarily sharing similar information with the U.S. and other governments. In other words, while it’s too soon to be certain, the reporting requirements look unlikely to be challenged in court for the time being.
Conclusion
“Information,” according to LawAI affiliate Noam Kolt and his distinguished co-authors, “is the lifeblood of good governance.” The field of AI governance is still in its infancy, and at times it seems like there’s near-universal agreement on the need for the federal government to do something and near-universal disagreement about what exactly that something should be. Establishing a flexible system for gathering information about the most capable models, and building up the government’s capacity for collecting and processing that information in a secure and intelligent way, seems like a good first step. The regulated parties, who have voluntarily committed to sharing certain information with the government and have largely chosen not to object to the idea of ongoing information-gathering by BIS, seem to agree. In an ideal world, Congress would pass a law explicitly authorizing such a system; maybe someday it will. In the meantime, it seems likely that BIS will implement some amended version of its proposed rule in the near future, and that the result will, for better or worse, be the most significant federal AI regulation to date.
Last edited on: October 30, 2024
The limits of liability
I’m probably as optimistic as anyone about the role that liability can play in AI governance. Indeed, as I’ll argue in a forthcoming article, I think it should be the centerpiece of our AI governance regime. But it’s important to recognize its limits.
First and foremost, liability alone is not an effective tool for solving public goods problems, which means it is poorly positioned to address at least some challenges presented by advanced AI. Liability is principally a tool for addressing risk externalities generated by training and deploying advanced AI systems. That is, AI developers and their customers largely capture the benefits of increasing AI capabilities, but most of the risk is borne by third parties who have no choice in the matter. This is the primary market failure associated with AI risk, but it’s not the only one. There is also a public goods problem with AI alignment and safety research. Like most information goods, advances in alignment and safety research are non-rival (you and I can both use the same idea without leaving less for the other) and non-excludable (once an idea is out, it’s hard to stop others from using it). Markets generally underprovide public goods, and AI safety research is no exception. Plausible policy interventions to address this problem include prizes and other forms of public subsidies. Private philanthropy can also continue to play an important role in supporting alignment and safety research. There may also be winner-take-all race dynamics that generate market distortions not fully captured by the risk externality and public goods problems.
Second, there are some plausible AI risk externalities that liability cannot realistically address, especially those involving structural harms or highly attenuated causal chains. For instance, if AI systems are used to spread misinformation or interfere with elections, this is unlikely to give rise to a liability claim. To the extent that AI raises novel issues in those domains, other policy ideas may be needed. Similarly, some ways of contributing to the risk of harm are too attenuated to trigger liability claims. For example, if the developer of a frontier or near-frontier model releases information about the model and its training data/process that enables lagging labs to move closer to the frontier, this could induce leading labs to move faster and exercise less caution. But it would not be appropriate or feasible to use liability tools to hold the first lab responsible for the downstream harms from this race dynamic.
Liability also has trouble handling uninsurable risks—those that might cause harms so large that a compensatory damages award would not be practically enforceable—if warning shots are unlikely. In my recent paper laying out a tort liability framework for mitigating catastrophic AI risk, I argue that liability can address uninsurable risks more broadly by applying punitive damages in “near miss” cases of practically compensable harm associated with the uninsurable risk. But if some uninsurable risks are unlikely to produce warning shots, then this indirect liability mechanism would not work to mitigate them. And if the uninsurable risk is realized, the harm would be too large to make a compensatory damages judgment practically enforceable. That means AI developers and deployers would have inadequate incentives to mitigate those risks.
Like most forms of domestic AI regulation, unilateral imposition of a strong liability framework is also subject to regulatory arbitrage. If the liability framework is sufficiently binding, AI development may shift to jurisdictions that don’t impose strong liability policies or comparably onerous regulations. While foreign AI developers would still be subject to liability if they harm people in countries with strong liability regimes, it may prove difficult to enforce those judgments if the developer lacks substantial assets in the country where the injuries occur. One potential solution to this problem is international treaties establishing reciprocal enforcement of liability judgments reached by the other country’s courts.
Finally, liability is a weak tool for influencing the conduct of governmental actors. By default, many governments will be shielded from liability, and many legislative proposals will continue to exempt government entities. Even if governments waive sovereign immunity for AI harms they are responsible for, the prospect of liability is unlikely to sway the decisions of government officials, who are more responsive to political than economic incentives. This means liability is a weak tool in scenarios where the major AI labs get nationalized as the technology gets more powerful. But even if AI research and development remains largely in the private sector, the use of AI by government officials will be poorly constrained by liability. Ideas like law-following AI are likely to be needed to constrain governmental AI deployment.
International law and advanced AI: exploring the levers for ‘hard’ control
The question of how artificial intelligence (AI) is to be governed has risen rapidly up the global agenda – and in July 2023, United Nations Secretary-General António Guterres raised the possibility of the “creation of a new global body to mitigate the peace and security risks of AI.” While the past year has seen the emergence of multiple initiatives for AI’s international governance – by states, international organizations and within the UN system – most of these remain in the realm of non-binding ‘soft law.’ However, many influential voices in the debate increasingly argue that the challenges posed by future AI systems mean that international AI governance will eventually need to include legally binding elements.
If and when states choose to take up this challenge and institute binding international rules on advanced AI – either under a comprehensive global agreement, or between a small group of allied states – there are three principal areas where such controls might usefully bite. First, states might agree to controls on particular end uses of AI that are considered most risky or harmful, drawing on the European Union’s new AI Act as a general model. Second, controls might be introduced on the technology itself, structured around the development of certain types of AI systems, irrespective of use – taking inspiration from arms control regimes and other international attempts to control or set rules around certain forms of scientific research. Third, states might seek to control the production and dissemination of the industrial inputs that power AI systems – principally the computing power that drives AI development – harmonizing export controls and other tools of economic statecraft.
Ahead of the upcoming United Nations Summit of the Future and the French-hosted international AI summit in 2025, this post explores these three possible control points and the relative benefits of each in addressing the challenges posed by advanced AI. It also addresses the structural questions and challenges that any binding regime would need to address – including its breadth in terms of state participation, how participation might be incentivized, the role that private sector AI labs might play, and the means by which equitable distribution of AI’s benefits could be enabled. This post is informed by ongoing research projects into the future of AI international governance undertaken by the Institute for Law & AI, Lawfare’s Legal AI Safety Initiative, and others.
Hard law approaches to AI governance
The capabilities of AI systems have advanced rapidly over the past decade. While these systems present significant opportunities for societal benefit, they also engender new risks and challenges. Possible risks from the next wave of general-purpose foundation models, often termed “frontier” or “advanced” AI, include increases in inequality, misuse by harmful actors, and dangerous malfunctions. Moreover, AI agents that are able to make and execute long-term plans may soon proliferate, and they would pose particular challenges.
As a result of these developments, states are beginning to take concrete steps to regulate AI at the domestic level. This includes the United States’ Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, the European Union’s AI Act, the UK’s AI White Paper and subsequent public consultation, and Chinese laws covering both the development and use of various AI systems. At the same time, given the rapid pace of change and cross-border nature of AI development and potential harms, it is increasingly recognized that domestic regulation alone will likely not be adequate to address the full spread of challenges that advanced AI systems pose.
As a result, recent years have also witnessed the emergence of a growing number of initiatives for international coordination of AI policy. In the twenty months since the launch of OpenAI’s ChatGPT propelled AI to the top of the policy agenda, we have seen two international summits on AI safety; the Council of Europe conclude its Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; the G7 launch its Hiroshima Process on responsible AI governance; and the UN launch an Advisory Body on international AI governance.
These ongoing initiatives are unlikely to represent the limits of states’ ambitions for AI coordination on the international plane. Indeed, should the pace of AI capability development continue as it has over the last decade, it seems likely that in the coming years states may choose to pursue some form of binding ‘hard law’ international governance for AI – moving beyond the mostly soft law commitments that have characterized today’s diplomatic efforts. Geopolitical developments, a rapid jump in AI capabilities, or a significant AI security incident or crisis might also lead states to support a hard law approach. Throughout 2023, several influential participants in the debate began to raise the possibility that binding international governance may become necessary once AI systems reach a certain capability level – most notably the AI lab OpenAI. A number of political and moral authorities have gone further and called for the immediate institution of binding international controls on AI – including the influential group of former politicians The Elders, who have called for an “international treaty establishing a new international AI safety agency,” and Pope Francis, who has urged the global community to adopt a “binding international treaty that regulates the development and use of artificial intelligence in its many forms.”
To date, these calls for binding international governance have been made only at a high level of abstraction, without detailed proposals for how a binding international AI governance regime might be structured or what activities should be controlled. Moreover, the advanced state of the various soft law approaches currently in progress means that the design and legal form of any hard law regime eventually instituted would be heavily conditioned by the AI governance initiatives and institutions that precede it. Nevertheless, given the significant possibility of states beginning discussion of binding AI governance in the coming years, there is value in surveying the areas where controls could be implemented, assessing the contribution these controls might make in addressing the challenges of AI, and identifying the relevant institutional antecedents.
Three control points
There are three main areas where binding international controls on AI might bite: on particular ‘downstream’ uses of AI, on the upstream ‘development’ of AI systems, and on the industrial inputs that underpin the development of AI systems.
Downstream uses of AI
If the primary motivation behind states introducing international controls is a desire to mitigate the perceived risks from advanced AI, then the most natural approach would be to structure those controls around the particular AI uses that are considered to pose the greatest level of risk. The most prominent domestic AI regulation – the European Union’s AI Act – follows this approach, introducing different tiers of control for uses of AI systems based around the perceived risk of those use cases. Those that are deemed most harmful – for example the use of AI for social-scoring or in biometric systems put in place to predict criminality – are prohibited outright.
This form of control could be replicated at an international level. Existing international law imposes significant constraints on certain uses of AI – such as the protections provided by international human rights law and international humanitarian law. However, explicitly identifying and controlling particular harmful AI uses would add an additional layer of granularity to these constraints. Should states wish to do so, arms control agreements offer one model for how this could be done.
The principal benefit of a use-based approach to international control of AI is its simplicity: where particular AI uses are most harmful, they can be controlled or prohibited. States should in theory also be able to update any new treaty regime, adding additional harmful uses of AI to a controlled list should they wish to do so – and if they are able to agree on these. Nevertheless, structuring international controls solely around identified harmful uses of AI also has certain limitations. Most importantly, while such a use-based governance regime would have a significant impact in addressing the risks posed by the deliberate misuse of AI, its impact in reducing other forms of AI risk is less clear.
As reported by the 2024 International Scientific Report on the Safety of Advanced AI, advanced AI systems may also pose risks stemming from the potential malfunction of those systems – regardless of their particular application or form of use. The “hallucinations” generated by the most advanced chatbots, in spite of their developers’ best intentions, are an early example of this. At the extreme, certain researchers have posited that developers might lose the ability to control the most advanced systems. The malfunction or loss of control of more advanced systems could have severe implications as these systems are increasingly incorporated into critical infrastructure, such as energy, financial, or cybersecurity networks. For example, a malfunction of an AI system incorporated into military systems, such as nuclear command, control, and communication infrastructure, could have catastrophic consequences. Use-based governance may be able to address this issue in part, by regulating the extent to which AI technology is permitted to be integrated into critical infrastructure at all – but such a form of control would not address the possibility of unexpected malfunction or loss of control of an AI system used in a permitted application.
Upstream development of AI
Given the possibility of dangerous malfunctions in advanced AI systems, a complementary approach would be to focus on the technology itself. Such an approach would entail structuring an international regime around controls on the upstream development of AI systems, rather than particularly harmful applications or uses.
International controls on upstream AI development could be structured in a number of ways. Controls could focus on security measures. They could include mandatory information security or other protective requirements, to ensure that key components of advanced AI systems, such as model weights, cannot leak or be stolen by harmful actors or geopolitical rivals. The regime might also require the testing of AI systems against agreed safety metrics prior to release, with AI systems that fail prohibited from release until they can be demonstrated to be safe. Alternatively, international rules might focus on each state jurisdiction’s compliance with agreed safety and oversight standards, rather than on the safety of individual AI systems or training runs.
Controls could focus on increasing transparency or other confidence-building measures. States could introduce a mandatory warning system should AI models reach certain capability thresholds, or should there be an AI security incident. A regime might also include a requirement to notify other state parties – or the treaty body, if one was created – before beginning training of an advanced AI system, allowing states to convene and discuss precautionary measures or mitigations. Alternatively, the regime could require that other state parties or the treaty body give approval before advanced systems are trained.

If robustly enforced, structuring controls around AI development would contribute significantly towards addressing the security risks posed by advanced AI systems. However, this approach to international governance also has its challenges. In particular, given that smaller AI systems are unlikely to pose significant risks, participants in any regime would likely need to also agree on thresholds for the introduction of controls – with these only applying to AI systems of a certain size or anticipated capability level. Provision may be needed to periodically update this threshold, in line with technological advances. In addition, given the benefits that advanced AI is expected to bring, an international regime controlling AI development would need to also include provision for the continued safe development of advanced AI systems above any capability threshold.
Industrial inputs: AI compute
Finally, a third approach to international governance would be for states to move another step back and focus on the AI supply chain. Supply-side controls of basic inputs have been successful in the past in addressing the challenges posed by advanced technology. An equivalent approach would involve structuring international controls around the industrial inputs necessary for the development of advanced AI systems, with a view to shaping the development of those systems.
The three principal inputs used to train AI systems are computing power, data, and algorithms. Of these, computing power (“compute”) is the most viable node for control by states, and hence the focus of this section. This is because AI models are trained on physical semiconductor chips, which are by their nature quantifiable (they can be counted), detectable (they can be identified and physically tracked), and excludable (they can be restricted). The supply chain for AI chips is also exceptionally concentrated. These properties mean that controlling the distribution of AI compute would likely be technologically feasible – should states be able to agree on how to do so.
International agreements on the flow and usage of AI chips could assist in reducing the risks from advanced AI in a number of different ways. Binding rules around the flow of AI chips could be used to augment or enforce a wider international regime covering AI uses or development – for example by denying these chips to states who violate the regime or to non-participating states. Alternatively, international controls around AI industrial inputs might be used to directly shape the trajectory of AI development, through directing the flow of chips towards certain actors, potentially mitigating the need to control downstream uses or upstream development of AI systems at all. Future technological advances may also make it possible to monitor the use of individual semiconductor chips – which would be useful in verifying compliance with any binding international rules around the development of AI systems.
Export control law can provide the conceptual basis for international control of AI’s industrial inputs. The United States has already introduced a sweeping set of domestic laws controlling the export of semiconductors, with a view to restricting China’s ability to acquire the chips needed to develop advanced AI and to maintaining the U.S. technological advantage in this space. These U.S. controls could serve as the basis for an expanded international semiconductor export control regime between the U.S. and its allies. Existing or historic multilateral export control regimes could also serve as a model for a future international agreement on AI compute exports. These include the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), under which Western states coordinated an arms embargo on Eastern Bloc countries, and its successor, the Wassenaar Arrangement, through which Western states harmonize controls on exports of conventional arms and dual-use items.
In order to be effective, controls on the export of physical AI chips would likely need to be augmented by restrictions on the proliferation of both AI systems themselves and of the technology necessary for the development of semiconductor manufacturing capability outside of participating states. Precedent for such a provision can be found in a number of international arms control agreements. For example, Article 1 of the Nuclear Non-Proliferation Treaty prohibits designated nuclear weapon states from transferring nuclear weapons or control over such weapons to any recipient, and from assisting, encouraging or inducing non-nuclear weapon states to manufacture or acquire the technology to do so. A similar provision controlling the exports of semiconductor design and manufacturing technology – perhaps again based on existing U.S. export controls – could be included in an international AI regime.
Structural challenges
A binding regime for governing advanced AI, agreed upon by states and incorporating any of the above controls, would face a number of structural challenges.
Private sector actors
The first of these stems from the nature of the current wave of AI development. Unlike many of the twentieth century’s most significant AI advances, which were developed by governments or academia, the most powerful AI models today are almost exclusively designed in corporate labs, trained using private sector-produced chips, and run on commercial cloud data centers. While certain AI companies have experimented with corporate structures such as a long-term benefit trust or capped-profit provision, commercial concerns are the major driver behind most of today’s AI advances – a situation that is likely to continue in the near future, absent significant government investment in AI capabilities.
As a result, a binding international regime aiming to control AI use or development would require a means of legally ensuring the compliance of private sector AI labs. This could be achieved through the imposition of obligations on participating state parties to implement the regime through domestic law. Alternatively, the treaty instituting the regime could impose direct obligations on corporations – a less common approach in international law. However, even in that case the primary responsibility for enforcing the regime and remedying breaches would likely still fall on states.
Breadth of state participation
A further issue relates to the breadth of state participation in any binding international regime: should this be targeted or comprehensive? At present, the frontier of the AI industry is concentrated in a small number of countries. A minilateral agreement concluded between a limited group of states (such as between the U.S. and its allies) would almost certainly be easier to reach consensus on than a comprehensive global agreement. Given the pace of AI development, and concerns regarding the capabilities of the forthcoming generation of advanced models, there is significant reason to favor the establishment of a minimally viable international agreement, concluded as quickly as possible.
Nevertheless, a major drawback of a minilateral agreement conducted between a small group of states – in contrast to a comprehensive global agreement – would be the issue of legitimacy. Although AI development is currently concentrated in a small number of states, any harms that result from the misuse or malfunction of AI systems are unlikely to remain confined within the borders of those states. In addition, citizens of the Global South may be least likely to realize the economic benefits that result from AI technological advances. As such, there is a strong normative argument for giving a voice to a broad group of states in the design of any international regime intended to govern its development – not simply those that are currently most advanced in terms of AI capabilities. In the absence of this, any regime would likely suffer from a critical absence of global legitimacy, potentially threatening both its longevity and the likelihood of other states later agreeing to join.
A minilateral agreement aiming to institute binding international rules to govern AI would therefore need to include a number of provisions to address these legitimacy issues. First, while it may prove more practicable to initially establish governance amongst a small group of states, it would greatly aid legitimacy if participants were to explicitly commit to working towards the establishment of a global regime, and to open the regime for all states to join in principle, provided they agree to the controls and any enforcement mechanisms. Precedent for such a provision can be found in other international agreements – for example the 1990 Chemical Weapons Accord between the U.S. and the USSR, which included a pledge to work towards a global prohibition on chemical weapons and eventually led to the establishment of the 1993 Chemical Weapons Convention, which is open to all states to join.
Incentives and distribution
This brings us to incentives. In order to encourage broad participation in the regime, states with less developed artificial intelligence sectors may need to be offered inducements to join – particularly given that doing so might curtail their freedom to develop their own domestic AI capabilities. One way to do so would be to include a commitment from leading AI states to distribute the benefits of AI advances to less developed states, conditional on those participants committing to not violating the restrictive provisions of the agreement – a so-called ‘dual mandate.’
Inspiration for such an approach could be drawn from the Nuclear Non-Proliferation Treaty, under which non-nuclear weapon participants agree to forgo the right to develop nuclear weapons in exchange for the sharing of “equipment, materials and scientific and technological information for the peaceful uses of nuclear energy.” An equivalent provision under an AI governance regime might for example grant participating states the right to access the most advanced systems, for public sector or economic development purposes, and promise assistance in incorporating these systems into beneficial use cases.
The international governance of AI remains a nascent project. Whether binding international controls of any form come to be implemented in the near future will depend upon a range of variables and political conditions, including the direction of AI technological development and the evolution of relations between leading AI states. As such, the feasibility of a binding international governance regime for AI remains to be seen. In light of 2024’s geopolitical tensions, and the traditional reluctance of the U.S. and China to accept international law restrictions that infringe on sovereignty or national security, binding international AI governance appears unlikely to be established immediately.
However, this position could rapidly change. Technological or geopolitical developments – such as a rapid and unexpected jump in AI capabilities, a shift in global politics, or an AI-related security incident or crisis with global impact – could act as forcing mechanisms leading states to come to support the introduction of international controls. In such a scenario, states will likely wish to implement these quickly, and will require guidance on both the form these controls should take and how they might be enacted.
Historical analogy suggests that international negotiations of a magnitude equivalent to the challenges AI will pose typically take many years to conclude. It took over ten years from the initial UN discussions of international supervision of nuclear material for the statute of the International Atomic Energy Agency to be negotiated. In the case of AI, states will likely not have that long. Given the stakes at hand, lawyers and policymakers should therefore begin considering, as a matter of urgency, both the form that future international AI governance should take and how it might be implemented.
What might the end of Chevron deference mean for AI governance?
In January of this year, the Supreme Court heard oral argument in two cases—Relentless, Inc. v. Department of Commerce and Loper Bright Enterprises, Inc. v. Raimondo—that will decide the fate of a longstanding legal doctrine known as “Chevron deference.” During the argument, Justice Elena Kagan spoke at some length about her concern that eliminating Chevron deference would impact the U.S. federal government’s ability to “capture the opportunities, but also meet the challenges” presented by advances in Artificial Intelligence (AI) technology.
Eliminating Chevron deference would dramatically impact the ability of federal agencies to regulate in a number of important areas, from health care to immigration to environmental protection. But Justice Kagan chose to focus on AI for a reason. In addition to being a hot topic in government at the moment—more than 80 items of AI-related legislation have been proposed in the current Session of the U.S. Congress—AI governance could prove to be an area where the end of Chevron deference will be particularly impactful.
The Supreme Court will issue a decision in Relentless and Loper Bright at some point before the end of June 2024. Most commentators expect the Court’s conservative majority to eliminate (or at least to significantly weaken) Chevron deference, notwithstanding the objections of Justice Kagan and the other two members of the Court’s liberal minority. But despite the potential significance of this change, relatively little has been written about what it means for the future of AI governance. Accordingly, this blog post offers a brief overview of what Chevron deference is and what its elimination might mean for AI governance efforts.
What is Chevron deference?
Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. is a 1984 Supreme Court case in which the Court laid out a framework for evaluating agency regulations interpreting federal statutes (i.e., laws). Under Chevron, federal courts defer to agency interpretations when: (1) the relevant part of the statute being interpreted is genuinely ambiguous, and (2) the agency’s interpretation is reasonable.
As an example of how this deference works in practice, consider the case National Electrical Manufacturers Association v. Department of Energy. There, a trade association of electronics manufacturers (NEMA) challenged a Department of Energy (DOE) regulation that imposed energy conservation standards on electric induction motors with power outputs between 0.25 and 3 horsepower. The DOE claimed that this regulation was authorized by a statute that empowered the DOE to create energy conservation standards for “small electric motors.” NEMA argued that motors with between 1 and 3 horsepower were too powerful to be “small electric motors” and that the DOE was therefore exceeding its statutory authority by attempting to regulate them. A federal court considered the language of the statute and concluded that the statute was ambiguous as to whether 1-3 horsepower motors could be “small electric motors.” The court also found that the DOE’s interpretation of the statute was reasonable. Therefore, the court deferred to the DOE’s interpretation under Chevron and the challenged regulation was upheld.
What effect would overturning Chevron have on AI governance efforts?
Consider the electric motor case discussed above. In a world without Chevron deference, the question considered by the court would have been “does the best interpretation of the statute allow DOE to regulate 1-3 horsepower motors?” rather than “is the DOE’s interpretation of this statute reasonable?” Under the new standard, lawsuits like NEMA’s would probably be more likely to succeed than they have been in recent decades under Chevron.
Eliminating Chevron would essentially take some amount of interpretive authority away from federal agencies and transfer it to federal courts. This would make it easier for litigants to successfully challenge agency actions, and could also have a chilling effect on agencies’ willingness to adopt potentially controversial interpretations. Simply put, no Chevron means fewer and less aggressive regulations. To libertarian-minded observers like Justice Neil Gorsuch, who has been strongly critical of the modern administrative state, this would be a welcome change—less regulation would mean smaller government, increased economic growth, and more individual freedom.[ref 1] Those who favor a laissez-faire approach to AI governance, therefore, should welcome the end of Chevron. Many commentators, however, have suggested that a robust federal regulatory response is necessary to safely develop advanced AI systems without creating unacceptable risks. Those who subscribe to this view would probably share Justice Kagan’s concern that degrading the federal government’s regulatory capacity will seriously impede AI governance efforts.
Furthermore, AI governance may be more susceptible to the potential negative effects of Chevron repeal than other areas of regulation. Under current law, the degree of deference accorded to agency interpretations “is particularly great where … the issues involve a high level of technical expertise in an area of rapidly changing technological and competitive circumstances.”[ref 2] This is because the regulation of emerging technologies is an area where two of the most important policy justifications for Chevron deference are at their most salient. Agencies, according to Chevron’s proponents, are (a) better than judges at marshaling deep subject matter expertise and hands-on experience, and (b) better than Congress at responding quickly and flexibly to changed circumstances. These considerations are particularly important for AI governance because AI is, in some ways, particularly poorly understood and unusually prone to manifesting unexpected capabilities and behaving in unexpected ways even in comparison to other emerging technologies.
Overturning Chevron would also make it more difficult for agencies to regulate AI under existing authorities by issuing new rules based on old statutes. The Federal Trade Commission, for example, does not necessarily need additional authorization to issue regulations intended to protect consumers from harms such as deceptive advertising using AI. It already has some authority to issue such regulations under § 5 of the FTC Act, which authorizes the FTC to issue regulations aimed at preventing “unfair or deceptive acts or practices in or affecting commerce.” But disputes will inevitably arise, as they often have in the past, over the exact meaning of statutory language like “unfair or deceptive acts or practices” and “in or affecting commerce.” This is especially likely to happen when old statutes (the “unfair or deceptive acts or practices” language in the FTC Act dates from 1938) are leveraged to regulate technologies that could not possibly have been foreseen when the statutes were drafted. Statutes that predate the technologies to which they are applied will necessarily be full of gaps and ambiguities, and in the past Chevron deference has allowed agencies to regulate more or less effectively by filling in those gaps. If Chevron is overturned, challenges to this kind of regulation will be more likely to succeed.
If Chevron is overturned, agency interpretations will still be entitled to a weaker form of deference known as Skidmore deference, after the 1944 Supreme Court case Skidmore v. Swift & Co. Skidmore requires courts to give respectful consideration to an agency’s interpretation, taking into account the agency’s expertise and knowledge of the policy context surrounding the statute. But Skidmore deference is not really deference at all; agency interpretations under Skidmore influence a court’s decision only to the extent that they are persuasive. In other words, replacing Chevron with Skidmore would require courts only to consider the agency’s interpretation, along with other arguments and authorities raised by the parties to a lawsuit, in the course of choosing the best interpretation of a statute.
How can legislators respond to the elimination of Chevron?
Chevron deference was not originally created by Congress—rather, it was created by the Supreme Court in 1984. This means that Congress could probably[ref 3] codify Chevron into law, if the political will to do so existed. However, past attempts to codify Chevron have mostly failed, and the difficulty of enacting controversial new legislation in the current era of partisan gridlock makes codifying Chevron an unlikely prospect in the short term.
However, codifying Chevron as a universal principle of judicial interpretation is not the only option. Congress can alternatively codify Chevron on a narrower basis, by including, in individual laws for which Chevron deference would be particularly useful, provisions directing courts to defer to specified agencies’ reasonable interpretations of specified statutory provisions. This approach could address Justice Kagan’s concerns about the desirability of flexible rulemaking in highly technical and rapidly evolving regulatory areas while also making concessions to conservative concerns about the constitutional legitimacy of the modern administrative state.
While codifying Chevron could be controversial, there are also some uncontroversial steps that legislators can take to shore up new legislation against post-Chevron legal challenges. Conservative and liberal jurists agree that statutes can legitimately confer discretion on agencies to choose between different available policy options. So, returning to the small electric motor example discussed above, a statute that explicitly granted the DOE broad discretion to define “small electric motor” in accordance with its policy judgment about which motors should be regulated would effectively confer that discretion even without Chevron. The same would be true for, e.g., a law authorizing the Department of Commerce to exercise discretion in defining the phrase “frontier model.”[ref 4] A reviewing court would then ask whether the challenged agency interpretation fell within the agency’s discretion, rather than asking whether the interpretation was the best interpretation possible.
Conclusion
If the Supreme Court eliminates Chevron deference in the coming months, that decision will have profound implications for the regulatory capacity of executive-branch agencies generally and for AI governance specifically. However, there are concrete steps that can be taken to mitigate the impact of Chevron repeal on AI governance policy. Governance researchers and policymakers should not underestimate the potential significance of the end of Chevron and should take it into consideration while proposing legislative and regulatory strategies for AI governance.
Last edited on: August 23, 2024
AI Insight Forum – privacy and liability
Summary
On November 8, our Head of Strategy, Mackenzie Arnold, spoke before the U.S. Senate’s bipartisan AI Insight Forum on Privacy and Liability, convened by Senate Majority Leader Chuck Schumer. We presented our perspective on how Congress can meet the unique challenges that AI presents to liability law.[ref 1]
In our statement, we note that:
- Liability is a critical tool for addressing risks posed by AI systems today and in the future, compensating victims, correcting market inefficiencies, and driving safety innovation.
- In some respects, existing law will function well by default.
- In others, artificial intelligence will present unusual challenges to liability law that may lead to inconsistency and uncertainty, penalize the wrong actors, and leave victims uncompensated.
- Courts, limited to the specific cases and facts at hand, may be slow to respond, creating a need for federal and state legislative action.
We then make several recommendations for how Congress could respond to these challenges:
- First, to prevent and deter harm from malicious and criminal misuse of AI, we recommend that Congress (1) hold developers and some deployers strictly liable for attacks on critical infrastructure and harms that result from biological, chemical, radiological, or nuclear weapons, (2) create strong incentives to secure and protect model weights, and (3) create duties to test for capabilities that could be misused and implement safeguards that cannot be easily removed.
- Second, to address unexpected capabilities and failure modes, we recommend that Congress (1) adjust obligations to account for whether a harm can be contained and remedied, (2) create a duty to test for emergent capabilities, including agentic behavior, (3) create duties to monitor, report, and respond to post-deployment harms, and (4) ensure that the law impute liability to a responsible human or corporate actor, where models act without human oversight.
- Third, to ensure that the costs to society are borne by those responsible and most able to prevent harm, we recommend that Congress (1) establish joint and several liability for harms involving AI systems, (2) consider some limitations on the ability of powerful developers to avoid responsibility through indemnification, and (3) clarify that AI systems are products subject to products liability law.
- Finally, to ensure that federal law does not obstruct the functioning of the liability system, we recommend that Congress (1) include a savings clause in any federal legislation to avoid preemption and (2) clarify that Section 230 does not apply to generative AI.
Dear Senate Majority Leader Schumer, Senators Rounds, Heinrich, and Young, and distinguished members of the U.S. Senate, thank you for the opportunity to speak with you about this important issue. Liability is a critical tool for addressing risks posed by AI systems today and in the future. In some respects, existing law will function well, compensating victims, correcting market inefficiencies, and driving safety innovation. However, artificial intelligence also presents unusual challenges to liability law that may lead to inconsistency and uncertainty, penalize the wrong actors, and leave victims uncompensated. Courts, limited to the specific cases and facts at hand, may be slow to respond. It is in this context that Congress has an opportunity to act.
Problem 1: Existing law will under-deter malicious and criminal misuse of AI.
Many have noted the potential for AI systems to increase the risk of various hostile threats, ranging from biological and chemical weapons to attacks on critical infrastructure like energy, elections, and water systems. AI’s unique contribution to these risks goes beyond simply identifying dangerous chemicals and pathogens; advanced systems may help plan, design, and execute complex research tasks or help criminals operate on a vastly greater scale. With this in mind, President Biden’s recent Executive Order has called upon federal agencies to evaluate and respond to systems that may “substantially lower[] the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons.” While large-scale malicious threats have yet to materialize, many AI systems are inherently dual-use. If AI is capable of tremendous innovation, it may also be capable of tremendous real-world harm. In many cases, the benefits of these systems will outweigh the risks, but the law can take steps to minimize misuse while preserving those benefits.
Existing criminal, civil, and tort law will penalize malevolent actors for the harms they cause; however, liability is insufficient to deter those who know they are breaking the law. AI developers and some deployers will have the most control over whether powerful AI systems fall into the wrong hands, yet they may escape liability (or believe and act as if they will). Unfortunately, existing law may treat malevolent actors’ intentional bad acts or alterations to models as intervening causes that sever the causal chain and preclude liability, and the law leaves unclear what obligations companies have to secure their models. Victims will go uncompensated if their only source of recourse is small, hostile actors with limited funds. Reform is needed to make clear that those with the greatest ability to protect and compensate victims will be responsible for preventing malicious harms.
Recommendations
(1.1) Hold AI developers and some deployers strictly liable for attacks on critical infrastructure and harms that result from biological, chemical, radiological, or nuclear weapons.
The law has long recognized that certain harms are so egregious that those who create them should internalize their cost by default. Harms caused by biological, chemical, radiological, and nuclear weapons fit these criteria, as do harms caused by attacks on critical infrastructure. Congress has addressed similar harms before, for example, creating strict liability for releasing hazardous chemicals into the environment.
(1.2) Consider (a) holding developers strictly liable for harms caused by malicious use of exfiltrated systems and open-sourced weights or (b) creating a duty to ensure the security of model weights.
Access to model weights increases malicious actors’ ability to enhance dangerous capabilities and remove critical safeguards. And once model weights are out, companies cannot regain control or restrict malicious use. Despite this, existing information security norms are insufficient, as evidenced by the leak of Meta’s LLaMA model just one week after it was announced and significant efforts by China to steal intellectual property from key US tech companies. Congress should create strong incentives to secure and protect model weights.
Getting this balance right will be difficult. Open-sourcing is a major source of innovation, and even the most scrupulous information security practices will sometimes fail. Moreover, penalizing exfiltration without restricting the open-sourcing of weights may create perverse incentives to open-source weights in order to avoid liability—what has been published openly can’t be stolen. To address these tradeoffs, Congress could pair strict liability with the ability to apply for safe harbor or limit liability to only the largest developers, who have the resources to secure the most powerful systems, while excluding smaller and more decentralized open-source platforms. At the very least, Congress should create obligations for leading developers to maintain adequate security practices and empower a qualified agency to update these duties over time. Congress could also support open-source development through secure, subsidized platforms like NAIRR or investigate other alternatives for providing safe access.
(1.3) Create duties to (a) identify and test for model capabilities that could be misused and (b) design and implement safeguards that consistently prevent misuse and cannot be easily removed.
Leading AI developers are best positioned to secure their models and identify dangerous misuse capabilities before they cause harm. The latter requires evaluation and red-teaming before deployment, as acknowledged in President Biden’s recent Executive Order, and continued testing and updates after deployment. Congress should codify clear minimum standards for identifying capabilities and preventing misuse and should grant a qualified agency authority to update these duties over time.
Problem 2: Existing law will under-compensate harms from models with unexpected capabilities and failure modes.
A core characteristic of modern AI systems is their tendency to display rapid capability jumps and unexpected emergent behaviors. While many of these advances have been benign, when unexpected capabilities cause harm, courts may treat them as unforeseeable and decline to impose liability. Other failures may occur when AI systems are integrated into new contexts, such as healthcare, employment, and agriculture, where integration presents both great upside and novel risks. Developers of frontier systems and deployers introducing AI into novel contexts will be best positioned to develop containment methods and detect and correct harms that emerge.
Recommendations
(2.1) Adjust the timing of obligations to account for redressability.
To balance innovation and risk, liability law can create obligations at different stages of the product development cycle. For harms that are difficult to control or remedy after they have occurred, like harms that upset complex financial systems or that result from uncontrolled model behavior, Congress should impose greater ex-ante obligations that encourage the proactive identification of potential risks. For harms that are capable of containment and remedy, obligations should instead encourage rapid detection and remedy.
(2.2) Create a duty to test for emergent capabilities, including agentic behavior and its precursors.
Developers will be best positioned to identify new emergent behaviors, including agentic behavior. While today’s systems have not displayed such qualities, there are strong theoretical reasons to believe that autonomous capabilities may emerge in the future, as reflected in the actions of key AI developers like Anthropic and OpenAI. As evaluation techniques develop, Congress should ensure that those working on frontier systems apply them rigorously and consistently. Here too, Congress should authorize a qualified agency to update these duties over time as new best practices emerge.
(2.3) Create duties to monitor, report, and respond to post-deployment harms, including taking down or fixing models that pose an ongoing risk.
If, as we expect, emergent capabilities are difficult to predict, it will be important to identify them even after deployment. In many cases, the only actors with sufficient information and technical insight to do so will be major developers of cutting-edge systems. Monitoring helps only insofar as it is accompanied by duties to report or respond. In at least some contexts, corporations already have a duty to report security breaches and respond to continuing risks of harm, but legal uncertainty limits the effectiveness of these obligations and puts safe actors at a competitive disadvantage. By clarifying these duties, Congress can ensure that all major developers meet a minimum threshold of safety.
(2.4) Create strict liability for harms that result from agentic model behavior such as self-exfiltration, self-alteration, self-proliferation, and self-directed goal-seeking.
Developers and deployers should maintain control over the systems they create. Behaviors that enable models to act on their own—without human oversight—should be disincentivized through liability for any resulting harms. “The model did it” is an untenable defense in a functioning liability system, and Congress should ensure that, where intent or personhood requirements would stand in the way, the law imputes liability to a responsible human or corporate actor.
Problem 3: Existing law may struggle to allocate costs efficiently.
The AI value chain is complex, often involving a number of different parties who help develop, train, integrate, and deploy systems. Because those later in the value chain are more proximate to the harms that occur, they may be the first to be brought to court. But these smaller, less-resourced actors will often have less ability to prevent harm. Disproportionately penalizing these actors will further concentrate power and diminish safety incentives for large, capable developers. Congress can ensure that responsibility lies with those most able to prevent harm.
Recommendations
(3.1) Establish joint and several liability for harms involving AI systems.
Victims will have limited information about who in the value chain is responsible for their injuries. Joint and several liability would allow victims to bring any responsible party to court for the full value of the injury. This would limit the burden on victims and allow better-resourced corporate actors to quickly and efficiently bargain toward a fair allocation of blame.
(3.2) Limit indemnification of liability by developers.
Existing law may allow wealthy developers to escape liability by contractually transferring blame to smaller third parties with neither the control to prevent harms nor the assets to remedy them. Because cutting-edge systems will be so desirable, a small number of powerful AI developers will have considerable leverage to extract concessions from third parties and users. Congress should limit indemnification clauses that help the wealthiest players avoid internalizing the costs of their products while still permitting them to voluntarily indemnify users.
(3.3) Clarify that AI systems are products under products liability law.
For over a decade, courts have refused to answer whether AI systems are software or products. This leaves critical ambiguity in existing law. The EU has proposed to resolve this uncertainty by declaring that AI systems are products. Though products liability is primarily developed through state law, a definitive federal answer to this question may spur quick resolution at the state level. Products liability has some notable advantages, focusing courts’ attention on the level of safety that is technically feasible, directly weighing risks and benefits, and applying liability across the value chain. Some have argued that this creates clearer incentives to proactively identify and invest in safer technology and limits temptations to go through the motions of adopting safety procedures without actually limiting risk. Products liability has its limitations, particularly in dealing with defects that emerge after deployment or alteration, but clarifying that AI systems are products is a good start.
Problem 4: Federal law may obstruct the functioning of liability law.
Parties are likely to argue that federal law preempts state tort and civil law and that Section 230 shields generative AI models from liability. Both would be unfortunate results that would prevent the redress of individual harms through state tort law and provide sweeping immunity to the very largest AI developers.
Recommendations
(4.1) Add a savings clause to any federal legislation to avoid preemption.
Congress regularly adds express statements that federal law does not eliminate, constrain, or preempt existing remedies under state law. Congress should do the same here. While federal law will provide much-needed ex-ante requirements, state liability law will serve a critical role in compensating victims and will be more responsive to harms that occur as AI develops by continuing to adjust obligations and standards of care.
(4.2) Clarify that Section 230 does not apply to generative AI.
The most sensible reading of Section 230 suggests that generative AI is a content creator. It creates novel and creative outputs rather than merely hosting existing information. But absent Congressional intervention, this ambiguity may persist. Congress should provide a clear answer: Section 230 does not apply to generative AI.
Defining “frontier AI”
What are legislative and administrative definitions?
Congress usually defines key terms like “frontier AI” in legislation to establish the scope of agency authorization. The agency then implements the law through regulations that more precisely set forth what is regulated, in terms sufficiently concrete to give notice to those subject to the regulation. In doing so, the agency may provide administrative definitions of key terms and set out specific examples or mechanisms.
Who can update these definitions?
Congress can amend legislation and might do so to supersede regulatory or judicial interpretations of the legislation. The agency can amend regulations to update its own definitions and implementation of the legislative definition.
Congress can also expressly authorize an agency to further define a term. For example, the Federal Insecticide, Fungicide, and Rodenticide Act defines “pest” to include any organism “the Administrator declares to be a pest” pursuant to 7 U.S.C. § 136.
What is the process for updating administrative definitions?
For a definition to be legally binding, by default an agency must follow the rulemaking process in the Administrative Procedure Act (APA). Typically, this requires that the agency go through specific notice-and-comment proceedings (informal rulemaking).
Congress can change the procedures an agency must follow to make rules, for example by dictating the frequency of updates or by authorizing interim final rulemaking, which permits the agency to accept comments after the rule is issued instead of before.
Can a technical standard be incorporated by reference into regulations and statutes?
Yes, but incorporation by reference in regulations is limited. The agency must specify what version of the standard is being incorporated, and regulations cannot dynamically update with a standard. Incorporation by reference in federal regulations is also subject to other requirements. When Congress codifies a standard in a statute, it may incorporate future versions directly, as it did in the Federal Food, Drug, and Cosmetic Act, defining “drug” with reference to the United States Pharmacopoeia. 21 U.S.C. § 321(g). Congress can instead require that an agency use a particular standard. For example, the U.S. Consumer Product Safety Improvement Act effectively adopted ASTM International Standards on toy safety as consumer product safety standards and required the Consumer Product Safety Commission to incorporate future revisions into consumer product safety rules. 15 U.S.C. § 2056b(a) & (g).
How frequently could the definition be updated?
By default, the rulemaking process is time-consuming. While the length of time needed to issue a rule varies, estimates from several agencies range from 6 months to over 4 years; internal estimates of the average are 3.5 years at the Food and Drug Administration (FDA) and 1.5 years at the Department of Transportation. Less significant updates, such as minor changes to a definition or to a list of regulated models, might take less time. However, legislation could impliedly or expressly allow updates to be made on a shorter timeline than the APA’s default procedures would allow.
An agency may bypass some or all of the notice-and-comment process “for good cause” if to do otherwise would be “impracticable, unnecessary, or contrary to the public interest,” 5 U.S.C. § 553(b)(3)(B), such as in the interest of an emergent national security issue or to prevent widespread disruption of flights. It may also bypass the process if the time required would harm the public or subvert the underlying statutory scheme, such as when an agency relied on the exemption for decades to issue weekly rules on volume restrictions for agricultural commodities because it could not reasonably “predict market and weather conditions more than a month in advance” as the 30-day advance notice would require (Riverbend Farms, 9th Cir. 1992).
Congress can also implicitly or explicitly waive the APA requirements. While the mere existence of a statutory deadline is not sufficient, a stringent deadline that makes compliance with the full notice-and-comment process impractical might constitute good cause.
What existing regulatory regimes may offer some guidance?
- The Federal Select Agents Program (FSAP) regulates biological agents that threaten public health, maintains a database of such agents, and inspects entities using such agents. FSAP also works with the FBI to evaluate entity-specific security risks. Finally, FSAP investigates incidents of non-compliance. FSAP provides a model for regulating technology as well as labs. The Program has some drawbacks worthy of study, including risks of regulatory capture (entity investigations are often not done by an independent examiner), prioritization problems (high-risk activities are often not prioritized), and resource-allocation problems (entity investigations are often slow and tedious).
- The FDA approves generic drugs by comparing their similarity in composition and risk to existing, approved drugs. Generic drug manufacturers attempt to show sufficient similarity to an approved drug so as to warrant a less rigorous review by the FDA. This framework has parallels with a relative, comparative definition of Frontier AI.
What are the potential legal challenges?
- Under the major questions doctrine, courts will not accept an agency interpretation of a statute that grants the agency authority over a matter of great “economic or political significance” unless there is a “clear congressional authorization” for the claimed authority. Defining “frontier AI” in certain regulatory contexts could plausibly qualify as a “major question.” Thus, an agency definition of “frontier AI” could be challenged under the major questions doctrine if issued without clear congressional authorization.
- The regulation could also face a challenge under the nondelegation doctrine, which limits Congress’s ability to delegate its legislative power. The doctrine requires Congress to include an “intelligible principle” to guide the agency’s exercise of the delegated authority. In practice, this is a lenient standard; however, some commentators believe that the Supreme Court may strengthen the doctrine in the near future. Legislation that provides more specific guidance regarding policy decisions is less problematic from a nondelegation perspective than legislation that confers a great deal of discretion on the agency and provides little or no guidance on how that discretion should be exercised.