What might the end of Chevron deference mean for AI governance?

In January of this year, the Supreme Court heard oral argument in two cases—Relentless, Inc. v. Department of Commerce and Loper Bright Enterprises, Inc. v. Raimondo—that will decide the fate of a longstanding legal doctrine known as “Chevron deference.” During the argument, Justice Elena Kagan spoke at some length about her concern that eliminating Chevron deference would impact the U.S. federal government’s ability to “capture the opportunities, but also meet the challenges” presented by advances in Artificial Intelligence (AI) technology.

Eliminating Chevron deference would dramatically impact the ability of federal agencies to regulate in a number of important areas, from health care to immigration to environmental protection. But Justice Kagan chose to focus on AI for a reason. In addition to being a hot topic in government at the moment—more than 80 items of AI-related legislation have been proposed in the current session of the U.S. Congress—AI governance could prove to be an area where the end of Chevron deference will be particularly impactful.

The Supreme Court will issue a decision in Relentless and Loper Bright at some point before the end of June 2024. Most commentators expect the Court’s conservative majority to eliminate (or at least to significantly weaken) Chevron deference, notwithstanding the objections of Justice Kagan and the other two members of the Court’s liberal minority. But despite the potential significance of this change, relatively little has been written about what it means for the future of AI governance. Accordingly, this blog post offers a brief overview of what Chevron deference is and what its elimination might mean for AI governance efforts.

What is Chevron deference?

Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. is a 1984 Supreme Court case in which the Court laid out a framework for evaluating agency regulations interpreting federal statutes (i.e., laws). Under Chevron, federal courts defer to agency interpretations when: (1) the relevant part of the statute is genuinely ambiguous, and (2) the agency’s interpretation is reasonable.

As an example of how this deference works in practice, consider the case National Electrical Manufacturers Association v. Department of Energy. There, a trade association of electronics manufacturers (NEMA) challenged a Department of Energy (DOE) regulation that imposed energy conservation standards on electric induction motors with power outputs between 0.25 and 3 horsepower. The DOE claimed that this regulation was authorized by a statute that empowered the DOE to create energy conservation standards for “small electric motors.” NEMA argued that motors with between 1 and 3 horsepower were too powerful to be “small electric motors” and that the DOE was therefore exceeding its statutory authority by attempting to regulate them. A federal court considered the language of the statute and concluded that the statute was ambiguous as to whether 1-3 horsepower motors could be “small electric motors.” The court also found that the DOE’s interpretation of the statute was reasonable. Therefore, the court deferred to the DOE’s interpretation under Chevron and the challenged regulation was upheld.

What effect would overturning Chevron have on AI governance efforts?

Consider the electric motor case discussed above. In a world without Chevron deference, the question considered by the court would have been “does the best interpretation of the statute allow DOE to regulate 1-3 horsepower motors?” rather than “is the DOE’s interpretation of this statute reasonable?” Under the new standard, lawsuits like NEMA’s would probably be more likely to succeed than they have been in recent decades under Chevron.

Eliminating Chevron would essentially take some amount of interpretive authority away from federal agencies and transfer it to federal courts. This would make it easier for litigants to successfully challenge agency actions, and could also have a chilling effect on agencies’ willingness to adopt potentially controversial interpretations. Simply put, no Chevron means fewer and less aggressive regulations. To libertarian-minded observers like Justice Neil Gorsuch, who has been strongly critical of the modern administrative state, this would be a welcome change—less regulation would mean smaller government, increased economic growth, and more individual freedom.1 Those who favor a laissez-faire approach to AI governance, therefore, should welcome the end of Chevron. Many commentators, however, have suggested that a robust federal regulatory response is necessary to safely develop advanced AI systems without creating unacceptable risks. Those who subscribe to this view would probably share Justice Kagan’s concern that degrading the federal government’s regulatory capacity will seriously impede AI governance efforts.

Furthermore, AI governance may be more susceptible to the potential negative effects of Chevron repeal than other areas of regulation. Under current law, the degree of deference accorded to agency interpretations “is particularly great where … the issues involve a high level of technical expertise in an area of rapidly changing technological and competitive circumstances.”2 This is because the regulation of emerging technologies is an area where two of the most important policy justifications for Chevron deference are at their most salient. Agencies, according to Chevron’s proponents, are (a) better than judges at marshaling deep subject matter expertise and hands-on experience, and (b) better than Congress at responding quickly and flexibly to changed circumstances. These considerations are particularly important for AI governance because, even in comparison to other emerging technologies, AI is in some ways poorly understood and unusually prone to manifesting unexpected capabilities and behaviors.

Overturning Chevron would also make it more difficult for agencies to regulate AI under existing authorities by issuing new rules based on old statutes. The Federal Trade Commission, for example, does not necessarily need additional authorization to issue regulations intended to protect consumers from harms such as deceptive advertising using AI. It already has some authority to issue such regulations under § 5 of the FTC Act, which authorizes the FTC to issue regulations aimed at preventing “unfair or deceptive acts or practices in or affecting commerce.” But disputes will inevitably arise, as they often have in the past, over the exact meaning of statutory language like “unfair or deceptive acts or practices” and “in or affecting commerce.” This is especially likely to happen when old statutes (the “unfair or deceptive acts or practices” language in the FTC Act dates from 1938) are leveraged to regulate technologies that could not possibly have been foreseen when the statutes were drafted. Statutes that predate the technologies to which they are applied will necessarily be full of gaps and ambiguities, and in the past Chevron deference has allowed agencies to regulate more or less effectively by filling in those gaps. If Chevron is overturned, challenges to this kind of regulation will be more likely to succeed. For instance, the anticipated legal challenge to the Biden administration’s use of the Defense Production Act to authorize reporting requirements for AI labs developing dual-use foundation models might be more likely to succeed in a post-Chevron world.3

If Chevron is overturned, agency interpretations will still be entitled to a weaker form of deference known as Skidmore deference, after the 1944 Supreme Court case Skidmore v. Swift & Co. Skidmore requires courts to give respectful consideration to an agency’s interpretation, taking into account the agency’s expertise and knowledge of the policy context surrounding the statute. But Skidmore deference is not really deference at all; agency interpretations under Skidmore influence a court’s decision only to the extent that they are persuasive. In other words, replacing Chevron with Skidmore would require courts only to consider the agency’s interpretation, along with the other arguments and authorities raised by the parties to a lawsuit, in the course of choosing the best interpretation of a statute.

How can legislators respond to the elimination of Chevron?

Chevron deference was not originally created by Congress—rather, it was created by the Supreme Court in 1984. This means that Congress could probably4 codify Chevron into law, if the political will to do so existed. However, past attempts to codify Chevron have mostly failed, and the difficulty of enacting controversial new legislation in the current era of partisan gridlock makes codifying Chevron an unlikely prospect in the short term. 

However, codifying Chevron as a universal principle of judicial interpretation is not the only option. Congress can alternatively codify Chevron on a narrower basis by including, in individual laws for which Chevron deference would be particularly useful, provisions directing courts to defer to specified agencies’ reasonable interpretations of specified statutory provisions. This approach could address Justice Kagan’s concerns about the desirability of flexible rulemaking in highly technical and rapidly evolving regulatory areas while also making concessions to conservative concerns about the constitutional legitimacy of the modern administrative state.

While codifying Chevron could be controversial, there are also some uncontroversial steps that legislators can take to shore up new legislation against post-Chevron legal challenges. Conservative and liberal jurists agree that statutes can legitimately confer discretion on agencies to choose between different available policy options. So, returning to the small electric motor example discussed above, a statute that explicitly granted the DOE broad discretion to define “small electric motor” in accordance with its policy judgment about which motors should be regulated would effectively confer that discretion. The same would be true of, e.g., a law authorizing the Department of Commerce to exercise discretion in defining the phrase “frontier model.”5 A reviewing court would then ask whether the challenged agency interpretation fell within the agency’s discretion, rather than whether it was the best possible interpretation of the statute.

Conclusion

If the Supreme Court eliminates Chevron deference in the coming months, that decision will have profound implications for the regulatory capacity of executive-branch agencies generally and for AI governance specifically. However, there are concrete steps that can be taken to mitigate the impact of Chevron repeal on AI governance policy. Governance researchers and policymakers should not underestimate the potential significance of the end of Chevron and should take it into account when proposing legislative and regulatory strategies for AI governance.

AI Insight Forum – privacy and liability

Summary

On November 8, our Head of Strategy, Mackenzie Arnold, spoke before the U.S. Senate’s bipartisan AI Insight Forum on Privacy and Liability, convened by Senate Majority Leader Chuck Schumer. We presented our perspective on how Congress can meet the unique challenges that AI presents to liability law.1

In our statement, we describe the challenges that AI presents to existing liability law and make several recommendations for how Congress could respond to those challenges. The full statement follows.


Dear Senate Majority Leader Schumer, Senators Rounds, Heinrich, and Young, and distinguished members of the U.S. Senate, thank you for the opportunity to speak with you about this important issue. Liability is a critical tool for addressing risks posed by AI systems today and in the future. In some respects, existing law will function well, compensating victims, correcting market inefficiencies, and driving safety innovation. However, artificial intelligence also presents unusual challenges to liability law that may lead to inconsistency and uncertainty, penalize the wrong actors, and leave victims uncompensated. Courts, limited to the specific cases and facts at hand, may be slow to respond. It is in this context that Congress has an opportunity to act. 

Problem 1: Existing law will under-deter malicious and criminal misuse of AI. 

Many have noted the potential for AI systems to increase the risk of various hostile threats, ranging from biological and chemical weapons to attacks on critical infrastructure like energy, elections, and water systems. AI’s unique contribution to these risks goes beyond simply identifying dangerous chemicals and pathogens; advanced systems may help plan, design, and execute complex research tasks or help criminals operate on a vastly greater scale. With this in mind, President Biden’s recent Executive Order has called upon federal agencies to evaluate and respond to systems that may “substantially lower[] the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons.” While large-scale malicious threats have yet to materialize, many AI systems are inherently dual-use. If AI is capable of tremendous innovation, it may also be capable of tremendous, real-world harms. In many cases, the benefits of these systems will outweigh the risks, but the law can take steps to minimize misuse while preserving benefits.

Existing criminal, civil, and tort law will penalize malevolent actors for the harms they cause; however, liability is insufficient to deter those who know they are breaking the law. AI developers and some deployers will have the most control over whether powerful AI systems fall into the wrong hands, yet they may escape liability (or believe and act as if they will). Unfortunately, existing law may treat malevolent actors’ intentional bad acts or alterations to models as intervening causes that sever the causal chain and preclude liability, and the law leaves unclear what obligations companies have to secure their models. Victims will go uncompensated if their only source of recourse is small, hostile actors with limited funds. Reform is needed to make clear that those with the greatest ability to protect and compensate victims will be responsible for preventing malicious harms. 

Recommendations

(1.1) Hold AI developers and some deployers strictly liable for attacks on critical infrastructure and harms that result from biological, chemical, radiological, or nuclear weapons.

The law has long recognized that certain harms are so egregious that those who create them should internalize their cost by default. Harms caused by biological, chemical, radiological, and nuclear weapons fit this description, as do harms caused by attacks on critical infrastructure. Congress has addressed similar harms before, for example, by creating strict liability for releasing hazardous chemicals into the environment.

(1.2) Consider (a) holding developers strictly liable for harms caused by malicious use of exfiltrated systems and open-sourced weights or (b) creating a duty to ensure the security of model weights.

Access to model weights increases malicious actors’ ability to enhance dangerous capabilities and remove critical safeguards. And once model weights are out, companies cannot regain control or restrict malicious use. Despite this, existing information security norms are insufficient, as evidenced by the leak of Meta’s LLaMA model just one week after it was announced and significant efforts by China to steal intellectual property from key US tech companies. Congress should create strong incentives to secure and protect model weights. 

Getting this balance right will be difficult. Open-sourcing is a major source of innovation, and even the most scrupulous information security practices will sometimes fail. Moreover, penalizing exfiltration without restricting the open-sourcing of weights may create perverse incentives to open-source weights in order to avoid liability—what has been published openly can’t be stolen. To address these tradeoffs, Congress could pair strict liability with the ability to apply for safe harbor or limit liability to only the largest developers, who have the resources to secure the most powerful systems, while excluding smaller and more decentralized open-source platforms. At the very least, Congress should create obligations for leading developers to maintain adequate security practices and empower a qualified agency to update these duties over time. Congress could also support open-source development through secure, subsidized platforms like NAIRR or investigate other avenues for safe access.

(1.3) Create duties to (a) identify and test for model capabilities that could be misused and (b) design and implement safeguards that consistently prevent misuse and cannot be easily removed. 

Leading AI developers are best positioned to secure their models and identify dangerous misuse capabilities before they cause harm. The latter requires evaluation and red-teaming before deployment, as acknowledged in President Biden’s recent Executive Order, as well as continued testing and updates after deployment. Congress should codify clear minimum standards for identifying capabilities and preventing misuse and should grant a qualified agency authority to update these duties over time.

Problem 2: Existing law will under-compensate harms from models with unexpected capabilities and failure modes. 

A core characteristic of modern AI systems is their tendency to display rapid capability jumps and unexpected emergent behaviors. While many of these advances have been benign, when unexpected capabilities cause harm, courts may treat them as unforeseeable and decline to impose liability. Other failures may occur when AI systems are integrated into new contexts, such as healthcare, employment, and agriculture, where integration presents both great upside and novel risks. Developers of frontier systems and deployers introducing AI into novel contexts will be best positioned to develop containment methods and detect and correct harms that emerge.

Recommendations

(2.1) Adjust the timing of obligations to account for redressability. 

To balance innovation and risk, liability law can create obligations at different stages of the product development cycle. For harms that are difficult to control or remedy after they have occurred, like harms that upset complex financial systems or that result from uncontrolled model behavior, Congress should impose greater ex-ante obligations that encourage the proactive identification of potential risks. For harms that are capable of containment and remedy, obligations should instead encourage rapid detection and remedy. 

(2.2) Create a duty to test for emergent capabilities, including agentic behavior and its precursors. 

Developers will be best positioned to identify new emergent behaviors, including agentic behavior. While today’s systems have not displayed such qualities, there are strong theoretical reasons to believe that autonomous capabilities may emerge in the future, as acknowledged by the actions of key AI developers like Anthropic and OpenAI. As techniques develop, Congress should ensure that those working on frontier systems utilize these tools rigorously and consistently. Here too, Congress should authorize a qualified agency to update these duties over time as new best practices emerge.

(2.3) Create duties to monitor, report, and respond to post-deployment harms, including taking down or fixing models that pose an ongoing risk. 

If, as we expect, emergent capabilities are difficult to predict, it will be important to identify them even after deployment. In many cases, the only actors with sufficient information and technical insight to do so will be major developers of cutting-edge systems. Monitoring helps only insofar as it is accompanied by duties to report or respond. In at least some contexts, corporations already have a duty to report security breaches and respond to continuing risks of harm, but legal uncertainty limits the effectiveness of these obligations and puts safe actors at a competitive disadvantage. By clarifying these duties, Congress can ensure that all major developers meet a minimum threshold of safety. 

(2.4) Create strict liability for harms that result from agentic model behavior such as self-exfiltration, self-alteration, self-proliferation, and self-directed goal-seeking. 

Developers and deployers should maintain control over the systems they create. Behaviors that enable models to act on their own—without human oversight—should be disincentivized through liability for any resulting harms. “The model did it” is an untenable defense in a functioning liability system, and Congress should ensure that, where intent or personhood requirements would stand in the way, the law imputes liability to a responsible human or corporate actor.

Problem 3: Existing law may struggle to allocate costs efficiently. 

The AI value chain is complex, often involving a number of different parties who help develop, train, integrate, and deploy systems. Because those later in the value chain are more proximate to the harms that occur, they may be the first to be brought to court. But these smaller, less-resourced actors will often have less ability to prevent harm. Disproportionately penalizing these actors will further concentrate power and diminish safety incentives for large, capable developers. Congress can ensure that responsibility lies with those most able to prevent harm. 

Recommendations

(3.1) Establish joint and several liability for harms involving AI systems. 

Victims will have limited information about who in the value chain is responsible for their injuries. Joint and several liability would allow victims to bring any responsible party to court for the full value of the injury. This would limit the burden on victims and allow better-resourced corporate actors to quickly and efficiently bargain toward a fair allocation of blame. 

(3.2) Limit indemnification of liability by developers. 

Existing law may allow wealthy developers to escape liability by contractually transferring blame to smaller third parties that have neither the control to prevent harms nor the assets to remedy them. Because cutting-edge systems will be so desirable, a small number of powerful AI developers will have considerable leverage to extract concessions from third parties and users. Congress should limit indemnification clauses that help the wealthiest players avoid internalizing the costs of their products while still permitting them to voluntarily indemnify users.

(3.3) Clarify that AI systems are products under products liability law. 

For over a decade, courts have refused to answer whether AI systems are software or products. This leaves critical ambiguity in existing law. The EU has proposed to resolve this uncertainty by declaring that AI systems are products. Though products liability is primarily developed through state law, a definitive federal answer to this question may spur quick resolution at the state level. Products liability has some notable advantages, focusing courts’ attention on the level of safety that is technically feasible, directly weighing risks and benefits, and applying liability across the value chain. Some have argued that this creates clearer incentives to proactively identify and invest in safer technology and limits temptations to go through the motions of adopting safety procedures without actually limiting risk. Products liability has its limitations, particularly in dealing with defects that emerge after deployment or alteration, but clarifying that AI systems are products is a good start. 

Problem 4: Federal law may obstruct the functioning of liability law. 

Parties are likely to argue that federal law preempts state tort and civil law and that Section 230 shields generative AI models from liability. Both would be unfortunate results that would prevent the redress of individual harms through state tort law and provide sweeping immunity to the very largest AI developers.

Recommendations

(4.1) Add a savings clause to any federal legislation to avoid preemption. 

Congress regularly adds express statements that federal law does not eliminate, constrain, or preempt existing remedies under state law. Congress should do the same here. While federal law will provide much-needed ex-ante requirements, state liability law will serve a critical role in compensating victims and will be more responsive to harms that occur as AI develops by continuing to adjust obligations and standards of care. 

(4.2) Clarify that Section 230 does not apply to generative AI. 

The most sensible reading of Section 230 suggests that generative AI is a content creator. It creates novel and creative outputs rather than merely hosting existing information. But the statute’s application to generative AI remains unsettled, and absent Congressional intervention, this ambiguity may persist. Congress should provide a clear answer: Section 230 does not apply to generative AI.

Defining “frontier AI”

What are legislative and administrative definitions?

Congress usually defines key terms like “frontier AI” in legislation to establish the scope of agency authorization. The agency then implements the law through regulations that more precisely set forth what is regulated, in terms sufficiently concrete to give notice to those subject to the regulation. In doing so, the agency may provide administrative definitions of key terms and provide specific examples or mechanisms.

Who can update these definitions?

Congress can amend legislation and might do so to supersede regulatory or judicial interpretations of the legislation. The agency can amend regulations to update its own definitions and implementation of the legislative definition.

Congress can also expressly authorize an agency to further define a term. For example, the Federal Insecticide, Fungicide, and Rodenticide Act defines “pest” to include any organism “the Administrator declares to be a pest” pursuant to 7 U.S.C. § 136.

What is the process for updating administrative definitions?

For a definition to be legally binding, by default an agency must follow the rulemaking process in the Administrative Procedure Act (APA). Typically, this requires that the agency go through specific notice-and-comment proceedings (informal rulemaking). 

Congress can change the procedures an agency must follow to make rules, for example by dictating the frequency of updates or by authorizing interim final rulemaking, which permits the agency to accept comments after the rule is issued instead of before.

Can a technical standard be incorporated by reference into regulations and statutes?

Yes, but incorporation by reference in regulations is limited. The agency must specify what version of the standard is being incorporated, and regulations cannot dynamically update with a standard. Incorporation by reference in federal regulations is also subject to other requirements. When Congress codifies a standard in a statute, it may incorporate future versions directly, as it did in the Federal Food, Drug, and Cosmetic Act, defining “drug” with reference to the United States Pharmacopoeia. 21 U.S.C. § 321(g). Congress can instead require that an agency use a particular standard. For example, the U.S. Consumer Product Safety Improvement Act effectively adopted ASTM International Standards on toy safety as consumer product safety standards and required the Consumer Product Safety Commission to incorporate future revisions into consumer product safety rules. 15 U.S.C. § 2056b(a) & (g).

How frequently could the definition be updated?

By default, the rulemaking process is time-consuming. While the length of time needed to issue a rule varies, estimates from several agencies range from 6 months to over 4 years; the Food and Drug Administration (FDA) internally estimates its average at 3.5 years, and the Department of Transportation at 1.5 years. Less significant updates, such as minor changes to a definition or list of regulated models, might take less time. However, legislation could impliedly or expressly allow updates to be made in a shorter time frame than the APA’s default procedures would allow.

An agency may bypass some or all of the notice-and-comment process “for good cause” if to do otherwise would be “impracticable, unnecessary, or contrary to the public interest,” 5 U.S.C. § 553(b)(3)(B), such as in the interest of an emergent national security issue or to prevent widespread disruption of flights. It may also bypass the process if the time required would harm the public or subvert the underlying statutory scheme, such as when an agency relied on the exemption for decades to issue weekly rules on volume restrictions for agricultural commodities because it could not reasonably “predict market and weather conditions more than a month in advance” as the 30-day advance notice would require (Riverbend Farms, 9th Cir. 1992).

Congress can also implicitly or explicitly waive the APA requirements. While the mere existence of a statutory deadline is not sufficient, a stringent deadline that makes compliance impractical might constitute good cause.

What existing regulatory regimes may offer some guidance?

  1. The Federal Select Agents Program (FSAP) regulates biological agents that threaten public health, maintains a database of such agents, and inspects entities using such agents. FSAP also works with the FBI to evaluate entity-specific security risks. Finally, FSAP investigates incidents of non-compliance. FSAP provides a model for regulating technology as well as labs. The Program has some drawbacks worthy of study, including risks of regulatory capture (entity investigations are often not done by an independent examiner), prioritization issues (high-risk activities are often not prioritized), and resource constraints (entity investigations are often slow and tedious).
  2. The FDA approves generic drugs by comparing their similarity in composition and risk to existing, approved drugs. Generic drug manufacturers attempt to show sufficient similarity to an approved drug so as to warrant a less rigorous review by the FDA. This framework has parallels with a relative, comparative definition of frontier AI.

What are the potential legal challenges?

  1. Under the major questions doctrine, courts will not accept an agency interpretation of a statute that grants the agency authority over a matter of great “economic or political significance” unless there is a “clear congressional authorization” for the claimed authority. Defining “frontier AI” in certain regulatory contexts could plausibly qualify as a “major question.” Thus, an agency definition of “frontier AI” could be challenged under the major questions doctrine if issued without clear congressional authorization.
  2. The regulation could face a non-delegation doctrine challenge, which limits congressional delegation of legislative power. The doctrine requires Congress to include an “intelligible principle” guiding how the agency is to exercise the delegated authority. In practice, this is a lenient standard; however, some commentators believe that the Supreme Court may strengthen the doctrine in the near future. Legislation that provides more specific guidance regarding policy decisions is less problematic from a nondelegation perspective than legislation that confers a great deal of discretion on the agency and provides little or no guidance on how the agency should exercise it.