Why give AI agents actual legal duties?

The core proposition of Law-Following AI (LFAI) is that AI agents should be designed to refuse to take illegal actions in the service of their principals. However, as Ketan and I explain in our writeup of LFAI for Lawfare, this raises a significant legal problem: 

[A]s the law stands, it is unclear how an AI could violate the law. The law, as it exists today, imposes duties on persons. AI agents are not persons, and we do not argue that they should be. So to say “AIs should follow the law” is, at present, a bit like saying “cows should follow the law” or “rocks should follow the law”: It’s an empty statement because there are at present no applicable laws for them to follow.

Let’s call this the Law-Grounding Problem for LFAI. LFAI requires defining AI actions as either legal or illegal. The problem arises because courts generally cannot reason about the legality of actions taken by an actor without some sort of legally recognized status, and AI systems currently lack any such status.[ref 1]

In the LFAI article, we propose solving the Law-Grounding Problem by making AI agents “legal actors”: entities on which the law actually imposes legal duties, even if they have no legal rights. This is explained and defended more fully in Part II of the article. Let’s call this the Actual Approach to the Law-Grounding Problem.[ref 2] Under the Actual Approach, claims like “that AI violated the Sherman Act” are just as true within our legal system as claims like “Jane Doe violated the Sherman Act.”

There is, however, another possible approach that we did not address fully in the article: saying that an AI agent has violated the law if it took an action that, if taken by a human, would have violated the law.[ref 3] Let’s call this the Fictive Approach to the Law-Grounding Problem. Under the Fictive Approach, claims like “that AI violated the Sherman Act” would not be true in the same way that claims like “Jane Doe violated the Sherman Act” are. Instead, statements like “that AI violated the Sherman Act” would be, at best, a convenient shorthand for statements like “that AI took an action that, if taken by a human, would have violated the Sherman Act.”

I will argue that the Actual Approach is preferable to the Fictive Approach in some cases.[ref 4] Before that, however, I will explain why someone might be attracted to the Fictive Approach in the first place.

Motivating the Fictive Approach

To say that something is fictive is not to say that it is useless; legal fictions are common and useful. The Fictive Approach to the Law-Grounding Problem has several attractive features.

The first is its ease of implementation: the Fictive Approach does not require any fundamental rethinking of legal ontology. We do not need to either grant AI agents legal personhood or create a new legal category for them.

The Fictive Approach might also track common language use: when people make statements like “Claude committed copyright infringement,” they probably mean it in the fictive sense. 

Finally, the Fictive Approach also mirrors how we think about similar problems, like immunity doctrines. The King of England may be immune from prosecution, but we can nevertheless speak intelligibly of his actions as lawful or unlawful by analyzing what the legal consequences would be if he were not immune.

Why prefer the Actual Approach?

Nevertheless, I think there are good reasons to prefer the Actual Approach over the Fictive Approach.

Analogizing to Humans Might Be Difficult

The strongest reason, in my opinion, is that AI agents may “think” and “act” very differently from humans. The Fictive Approach requires us to take a string of actions that an AI did and ask whether a human who performed the same actions would have acted illegally. The problem is that AI agents can take actions that could be very hard for humans to take, and so judges and jurors might struggle to analyze the legal consequences of a human doing the same thing. 

Today’s proto-agents are somewhat humanlike in that they receive instructions in natural language, use computer tools designed for humans, reason in natural language, and generally take actions serially at approximately human pace and scale. But we should not expect this paradigm to last. For example, AI agents might soon:

And these are just some of the most foreseeable; over time, AI agents will likely become increasingly alien in their modes of reasoning and action. If so, then the Fictive Approach will become increasingly strained: judges and jurors will find themselves trying to determine whether actions that no human could have taken would have violated the law if performed by a human. At a minimum, this would require unusually good analogical reasoning skills; more likely, the coherence of the reasoning task would break down entirely.

Developing Tailored Laws and Doctrines for AIs

LFAI is motivated in large part by the belief that AI agents that are aligned to “a broad suite of existing laws”[ref 5] would be much safer than AI agents unbound by existing laws. But new laws specifically governing the behavior of AI agents will likely be necessary as AI agents transform society.[ref 6] The Fictive Approach, however, would not work for such AI-specific laws. Recall that the Fictive Approach says that an action by an AI agent violates a law just in case a human who took that action would have violated that law. But if the law in question applies only to AI agents, the Fictive Approach cannot be applied: no human could violate that law.

Relatedly, we may wish to develop new AI-specific legal doctrines, even for laws that apply to both humans and AIs. For example, we might wish to develop new doctrines for applying existing laws with a mental state component to AI agents.[ref 7] Alternatively, we may need to develop doctrines for determining when multiple instances of the same (or similar) AI models should be treated as identical actors. But the Fictive Approach is in tension with the development of AI-specific doctrines, since the whole point of the Fictive Approach is precisely to avoid reasoning about AI systems in their own right.

These conceptual tensions may be surmountable. But as a practical matter, a legal ontology that enables courts and legislatures to actually reason about AI systems in their own right seems more likely to produce nuanced doctrines and laws that are responsive to the actual nature of AI systems. The Fictive Approach, by contrast, encourages courts and legislatures to map AI actions onto human actions, which risks overlooking or minimizing the significant differences between humans and AI systems.

Grounding Respondeat Superior Liability

Some scholars propose using respondeat superior to impose liability on the human principals of AI agents for any “torts” committed by the latter.[ref 8] However, “[r]espondeat superior liability applies only when the employee has committed a tort. Accordingly, to apply respondeat superior to the principals of an AI agent, we need to be able to say that the behavior of the agent was tortious.”[ref 9] We can only say that the behavior of an AI agent was truly tortious if it had a legal duty to violate. The Actual Approach allows for this; the Fictive Approach does not.

Of course, another option is simply to use the Fictive Approach for the application of respondeat superior liability as well. However, the Actual Approach seems preferable insofar as it doesn’t require this additional change. More generally, precisely because the Actual Approach integrates AI systems into the legal system more fully, it can be leveraged to parsimoniously solve problems in areas of law beyond LFAI.

In the LFAI article, we take no position as to whether AI agents should be given legal personhood: a bundle of duties and rights.[ref 10] However, there may be good reasons to grant AI agents some set of legal rights.[ref 11] 

Treating AI agents as legal actors under the Actual Approach creates optionality with respect to legal personhood: if the law recognizes an entity’s existence and imposes duties on it, it is easier for the law to subsequently grant that entity rights (and therefore personhood). But, we argue, the Actual Approach creates no obligation to do so:[ref 12] the law can coherently say that an entity has duties but no rights. Since it is unclear whether it is desirable to give AIs rights, this optionality is desirable. 

*      *      *

AI companies[ref 13] and policymakers[ref 14] are already tempted to impose legal duties on AI systems. To make serious policy progress towards this, they will need to decide whether to actually do so, or merely use “lawbreaking AIs” as shorthand for some strained analogy to lawbreaking humans. Choosing the former path—the Actual Approach—is simpler and more adaptable, and therefore preferable. 

Protecting AI whistleblowers

In May 2024, OpenAI found itself at the center of a national controversy when news broke that the AI lab was pressuring departing employees to sign contracts with extremely broad nondisparagement and nondisclosure provisions—or else lose their vested equity in the company. This would essentially have required former employees to avoid criticizing OpenAI for the indefinite future, even on the basis of publicly known facts and nonconfidential information.

Although OpenAI quickly apologized and promised not to enforce the provisions in question, the damage had already been done—a few weeks later, a number of current and former OpenAI and Google DeepMind employees signed an open letter calling for a “right to warn” about serious risks posed by AI systems, noting that “[o]rdinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The controversy over OpenAI’s restrictive exit paperwork helped convince a number of industry employees, commentators, and lawmakers of the need for new legislation to fill in gaps in existing law and protect AI industry whistleblowers from retaliation. This culminated recently in the AI Whistleblower Protection Act (AI WPA), a bipartisan bill introduced by Sen. Chuck Grassley (R-Iowa) along with a group of three Republican and three Democratic senators. Companion legislation was introduced in the House by Reps. Ted Lieu (D-Calif.) and Jay Obernolte (R-Calif.).

Whistleblower protections such as the AI WPA are minimally burdensome, easy to implement and enforce, and plausibly useful for facilitating government access to the information needed to mitigate AI risks. They also have genuine bipartisan appeal, meaning there is actually some possibility of enacting them. As increasingly capable AI systems continue to be developed and adopted, it is essential that those most knowledgeable about any dangers posed by these systems be allowed to speak freely.

Why Whistleblower Protections?

The normative case for whistleblower protections is simple: Employers shouldn’t be allowed to retaliate against employees for disclosing information about corporate wrongdoing. The policy argument is equally straightforward—company employees often witness wrongdoing well before the public or government becomes aware but can be discouraged from coming forward by fear of retaliation. Prohibiting retaliation is an efficient way of incentivizing whistleblowers to come forward and a strong social signal that whistleblowing is valued by governments (and thus worth the personal cost to whistleblowers).

There is also reason to believe that whistleblower protections could be particularly valuable in the AI governance context. Information is the lifeblood of good governance, and it’s unrealistic to expect government agencies and the legal system to keep up with the rapid pace of progress in AI development. Often, the only people with the information and expertise necessary to identify the risks that a given model poses will be the people who helped create it.

Of course, there are other ways for governments to gather information on emerging risks. Prerelease safety evaluations, third-party audits, basic registration and information-sharing requirements, and adverse event reporting are all tools that help governments develop a sharper picture of emerging risks. But these tools have mostly not been implemented in the U.S. on a mandatory basis, and there is little chance they will be in the near future.

Furthermore, whistleblower disclosures are a valuable source of information even in thoroughly regulated and relatively well-understood contexts like securities trading. In fact, the Securities and Exchange Commission has awarded more than $2.2 billion to more than 444 whistleblowers since its highly successful whistleblower program began in 2012. We therefore expect AI whistleblowers to be a key source of information no matter how sophisticated the government’s other information-gathering authorities (which, currently, are almost nonexistent) become.

Whistleblower protections are also minimally burdensome. A bill like the AI WPA imposes no affirmative obligations on affected companies. It doesn’t prevent them from going to market or integrating models into useful products. It doesn’t require them to jump through procedural hoops or prescribe rigid safety practices. The only thing necessary for compliance is to refrain from retaliating against employees or former employees who lawfully disclose important information about wrongdoing to the government. It seems highly unlikely that this kind of common-sense restriction could ever significantly hinder innovation in the AI industry. This may explain why even innovation-focused, libertarian-minded commentators like Martin Casado of Andreessen Horowitz and Dean Ball have reacted favorably to AI whistleblower bills like California SB 53, which would prohibit retaliation against whistleblowers who disclose information about “critical risks” from frontier AI systems. It’s also worth noting that the AI WPA’s House companion bill was co-introduced by Rep. Obernolte, who has been the driving force behind the controversial AI preemption provision in the GOP reconciliation bill.

The AI Whistleblower Protection Act

Beyond the virtues of whistleblower protections generally, how does the actual whistleblower bill currently making its way through Congress stack up?

In our opinion, favorably. A few weeks ago, we published a piece on how to design AI whistleblower legislation. The AI WPA checks almost all of the boxes we identified, as discussed below.

Dangers to Public Safety

First, and most important, the AI WPA fills a significant gap in existing law by protecting disclosures about “dangers” to public safety even if the whistleblower can’t point to any law violation by their employer. Specifically, the law protects disclosures related to a company’s failure to appropriately respond to “substantial and specific danger[s]” to “public safety, public health, or national security” posed by AI, or about “security vulnerabilit[ies]” that could allow foreign countries or other bad actors to steal model weights or algorithmic secrets from an AI company. This is significant because the most important existing protection for whistleblowers at frontier AI companies—California’s state whistleblower statute—only protects disclosures about law violations.

It’s important to protect disclosures about serious dangers even when no law has been violated because the law, with respect to emerging technologies like AI, often lags far behind technological progress. When the peer-to-peer file sharing service Napster was founded in 1999, it wasn’t immediately clear whether its practices were illegal. By the time court decisions resolved the ambiguity, a host of new sites using slightly different technology had sprung up and were initially determined to be legal before the Supreme Court stepped in and reversed the relevant lower court decisions in 2005. In a poorly understood, rapidly changing, and almost totally unregulated area like AI development, the prospect of risks arising from behavior that isn’t clearly prohibited by any existing law is all too plausible.

Consider a hypothetical: An AI company trains a new cutting-edge model that beats out its competitors’ latest offerings on a wide variety of benchmarks, redefining the state of the art for the nth time in as many months. But this time, a routine internal safety evaluation reveals that the new model can, with a bit of jailbreaking, be convinced to plan and execute a variety of cyberattacks that the evaluators believe would be devastatingly effective if carried out, causing tens of millions of dollars in damage and crippling critical infrastructure. The company, under intense pressure to release a model that can compete with the newest releases from other major labs, implements safeguards that employees believe can be easily circumvented but otherwise ignores the danger and misrepresents the results of its safety testing in public statements.

In the above hypothetical, is the company’s behavior unlawful? An enterprising prosecutor might be able to make charges stick in the aftermath of a disaster, because the U.S. has some very broad criminal laws that can be creatively interpreted to prohibit a wide variety of behaviors. But the illegality of the company’s behavior is at the very least highly uncertain.

Now, suppose that an employee with knowledge of the safety testing results reported those results in confidence to an appropriate government agency. Common sense dictates that the company shouldn’t be allowed to fire or otherwise punish the employee for such a public-spirited act, but under currently existing law it is doubtful whether the whistleblower would have any legal recourse if terminated. Knowing this, they might well be discouraged from coming forward in the first place. This is why establishing strong, clear protections for AI employees who disclose information about serious threats to public safety is important. This kind of protection is also far from unprecedented—currently, federal employees enjoy a similar protection for disclosures about “substantial and specific” dangers, and there are also sector-specific protections for certain categories of private-sector employees such as (for example) railroad workers who report “hazardous safety or security conditions.”

Importantly, the need to protect whistleblowers has to be weighed against the legitimate interest that AI companies have in safeguarding valuable trade secrets and other confidential business information. A whistleblower law that is too broad in scope might allow disgruntled employees to steal from their former employers with impunity and hand over important technical secrets to competitors. The AI WPA, however, sensibly limits its danger-reporting protection to disclosures made to appropriate government officials or internally at a company regarding “substantial and specific danger[s]” to “public safety, public health, or national security.” This means that, for better or worse, reporting about fears of highly speculative future harms will probably not be protected, nor will disclosures to the media or watchdog groups.

Preventing Contractual Waivers of Whistleblower Rights

Another key provision states that contractual waivers of the whistleblower rights created by the AI WPA are unenforceable. This is important because nondisclosure and nondisparagement agreements are common in the tech industry, and are often so broadly worded that they purport to prohibit an employee or former employee from making the kinds of disclosures that the AI WPA is intended to protect. It was this sort of broad nondisclosure agreement (NDA) that first sparked widespread public interest in AI whistleblower protections during the 2024 controversy over OpenAI’s exit paperwork.

OpenAI’s promise to avoid enforcing the most controversial parts of its NDAs did not change the underlying legal reality that allowed OpenAI to propose the NDAs in the first place, and that would allow any other frontier AI company to propose similarly broad contractual restrictions in the future. As we noted in a previous piece on this subject, there is some chance that attempts to enforce such restrictions against genuine whistleblowers would be unsuccessful, because of either state common law or existing state whistleblower protections. Even so, the threat of being sued for violating an NDA could discourage potential whistleblowers even if such a lawsuit might not eventually succeed. A clear federal statutory indication that such contracts are unenforceable would therefore be a welcome development. The AI WPA, which clearly resolves the NDA issue by providing that “[t]he rights and remedies provided for in this section may not be waived or altered by any contract, agreement, policy form, or condition of employment,” would provide exactly this.

Looking Forward

It’s not clear what will happen to the AI Whistleblower Protection Act. It appears as likely to pass as any AI measure we’ve seen, given the substantial bipartisan enthusiasm behind it and the lack of any substantial pushback from industry to date. But it is difficult in general to pass federal legislation, and the fact that there has been very little in the way of vocal opposition to this bill to date doesn’t mean that dissenting voices won’t make themselves heard in the coming weeks.

Regardless of what happens to this specific bill, those who care about governing AI well should continue to support efforts to pass something like the AI WPA. However concerned or unconcerned one may be about the dangers posed by AI, the bill as a whole serves a socially valuable purpose: establishing a uniform whistleblower protection regime for reports about security vulnerabilities and lawbreaking in a critically important industry.

Ten Highlights of the White House’s AI Action Plan

Today, the White House released its AI Action Plan, laying out the administration’s priorities for AI innovation, infrastructure, and adoption. Ultimately, the value of the Plan will depend on how it is operationalized via executive orders and the actions of executive branch agencies, but the Plan itself contains a number of promising policy recommendations. We’re particularly excited about:  

  1. The section on federal government evaluations of national security risks in frontier models. This section correctly identifies the possibility that “the most powerful AI systems may pose novel national security risks in the near future,” potentially including risks from cyberattacks and risks related to the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. Ensuring that the federal government has the personnel, expertise, and authorities needed to guard against these risks should be a bipartisan priority. 
  2. The discussion of interpretability and control, which recognizes the importance of interpretability to the use of advanced AI systems in national security and defense applications. The Plan also recommends three policy actions for advancing the science of interpretability, each of which seems useful for frontier AI security in expectation.
  3. The overall focus on standard-setting by the Center for AI Standards and Innovation (CAISI, formerly known as the AI Safety Institute) and other government agencies, in partnership with industry, academia, and civil society organizations.
  4. The recommendation on building an AI evaluations ecosystem. The science of evaluating AI systems’ capabilities is still in its infancy, but the Plan identifies a few promising ways for CAISI and other government agencies to support the development of this critical field.
  5. The emphasis on physical security and cybersecurity for frontier labs, and on bolstering critical infrastructure cybersecurity. As Leopold Aschenbrenner pointed out in “Situational Awareness,” AI labs are not currently equipped to protect their model weights and algorithmic secrets from being stolen by China or other geopolitical rivals of the U.S., and fixing this problem is a crucial national security imperative.
  6. The call to improve the government’s capacity for AI incident response. Advanced planning and capacity-building are crucial for ensuring that the government is prepared to respond in the event of an AI emergency. Incident response preparation is an effective way to increase resiliency without directly burdening innovation.
  7. The section on how the legal system should handle deceptive AI-generated “evidence.” Legal rules often lag behind technological development, and the guidance contemplated here could be highly useful to courts that might otherwise be unprepared to handle an influx of unprecedentedly convincing fake evidence. 
  8. The recommendations for ramping up export control enforcement and plugging loopholes in existing semiconductor export controls. Compute governance—preventing geopolitical rivals from gaining access to the chips needed to train cutting-edge frontier AI models—continues to be an effective policy tool for maintaining the U.S.’s lead in the race to develop advanced AI systems before China. 
  9. The suggested regulatory sandboxes, which could enable AI adoption and increase the AI governance capacity of sectoral regulatory agencies like the FDA and the SEC.
  10. The section on deregulation wisely rejects the maximalist position of the moratorium that was stripped from the recent reconciliation bill by a 99-1 Senate vote. Instead of proposing overbroad and premature preemption of virtually all state AI regulations, the Plan recommends that AI-related federal funding should not “be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
    • At the moment, it’s hard to identify any significant source of “AI-related federal funding” to states, although this could change in the future. This being the case, it will likely be difficult for the federal government to offer states any significant inducement towards deregulation unless it first offers them new federal money. And disincentivizing truly “burdensome” state regulations that would interfere with the effectiveness of federal grants seems like a sensible alternative to broader forms of preemption.
    • The Plan also seems to suggest that the FCC could preempt some state AI regulations under § 253 of the Communications Act of 1934. It remains to be seen whether and to what extent this kind of preemption is legally possible. At first glance, however, it seems unlikely that the FCC’s authority to regulate telecommunications services could legally be used for any especially broad preemption of state AI laws. Any broad FCC preemption under this authority would likely have to go through notice and comment procedures and might struggle to overcome legal challenges from affected states.

Christoph Winter’s remarks to the European Parliament on AI Agents and Democracy

Summary

On July 17th, LawAI’s Director and Founder, Christoph Winter, was invited to speak before the European Parliament’s Special Committee on the European Democracy Shield, with the participation of IMCO and LIBE Committee members. Professor Winter was asked to present on AI governance, regulation, and democratic safeguards. He spoke about the democratic challenges that AI agents may present and how democracies could approach these challenges.

Two recommendations were made to the Committee:

Transcript

Distinguished Members of Parliament, fellow speakers and experts,

Manipulating public opinion at scale used to require vast resources. This situation is changing quickly. During Slovakia’s 2023 election, a simple deepfake audio recording of a candidate discussing vote-buying schemes circulated just 48 hours before polls opened, which was too late for fact-checking, but not too late to reach thousands of voters. And deepfakes are really just the beginning.

AI agents, which are autonomous systems that can act on the internet like skilled human workers, are being developed by all major AI companies. And soon they could be able to simultaneously orchestrate large-scale manipulation campaigns, hack electoral systems, and coordinate cyber-attacks on fact-checkers—all while operating 24/7 at unprecedented scale.

Today, I want to propose two solutions to these democratic challenges. First, requiring AI agents to be Law-following by design. And second, strengthening the AI Office’s capacity to understand and address AI risks. Let me explain each.

Law-following AI requires AI systems to be architecturally constrained to refuse actions that would be illegal if performed by humans in the same position. Just as AIs are currently trained to decline to help build bombs, they would reject orders to violate constitutional rights or election laws.

Law-following AI is democratically compelling for three reasons: First, it is democratically legitimate. Laws represent our collective will, refined through democratic deliberation, rather than unilaterally determined corporate values. Second, it enables democratic adaptability. Laws can be changed through democratic processes, and AI agents designed to follow law can automatically adjust their behavior. Third, it offers a democratic shield—because without these constraints, we risk creating AI agents that blindly follow orders, and history has shown where blind obedience leads.

In practice, this would mean that AI agents bound by law would refuse orders to suppress political speech, manipulate elections, blackmail officials, or harass dissidents. This way, law-following AI could prevent authoritarian actors from using obedient AI agents to entrench their power. Of course, it can’t prevent all forms of manipulation—much harmful persuasion operates within legal bounds. But blocking AI agents from illegal attacks on democracy is a critical first step.

The EU’s Code of Practice on General-Purpose AI already recognizes this danger and identifies “lawlessness” as a model propensity that contributes to systemic risk. But just as we currently lack reliable methods to assess how persuasive AI systems are, we currently lack a way to reliably measure AI lawlessness.

And perhaps most concerningly—and this brings me to my second proposal—the AI Office currently lacks the institutional capacity to develop these crucial capabilities.

The AI Office needs sufficient technical, policy, and legal staff to rigorously analyze what companies submit under the Code of Practice and AI Act—to scrutinize their risk assessments, verify their mitigation measures, and spot gaps in their safety evaluations. In other words: When a company claims their AI agent is law-following, the AI Office must have the expertise and resources to independently test that claim. When developers report on persuasion capabilities—capabilities that even they may not fully understand—the AI Office needs experts who can identify what’s missing from those reports.

Rigorous evaluation isn’t just about compliance—it’s about how we learn: each assessment and each gap we identify builds our understanding of these systems. This is why adequate AI Office capacity matters: not just for evaluating persuasion capabilities or Law-following AI today, but for understanding and preparing for risks to democracy that grow with each model release.

To illustrate what the current resource gap looks like: Recent reports suggest Meta offered one AI researcher a salary package of €190 million. The AI Office—tasked with overseeing the entire industry—operates on less.

This gap between private power and public capacity is unsustainable for our democracy. If we’re serious about democracy, we must fund our institutions accordingly.

So to protect democracy, we can start with two things: AI agents bound by human laws, and an AI Office with the capacity to understand and evaluate the risks.

Thank you.

The full video can be watched here (starts 12:01:02).

Future frontiers for research in law and AI

LawAI’s Legal Frontiers team aims to incubate new law and policy proposals that are simultaneously:

  1. Anticipatory, in that they respond to a reasonable forecast of the legal and policy challenges that further advances in AI will produce
  2. Actionable, in that we can make progress within these workstreams even under significant uncertainty
  3. Accommodating to a wide variety of worldviews and technological trajectories, given the shared challenges that AI will create and the uncertainties we have about likely developments
  4. Ambitious, in that they both significantly reduce some of the largest risks from AI while also enabling society to reap its benefits

Currently, the Legal Frontiers team owns two workstreams:

  1. AI Agents and the Rule of Law
  2. International Regulatory Institutions

However, the general vision behind Legal Frontiers is to continuously spin out mature workstreams, freeing us to identify and incubate new ones. To that end, we recently updated LawAI’s Workstreams and Research Directions document to list some “Future Frontiers” on which we might work in the future.

But we don’t want people to wait for us to start working on these questions: they are already ripe for scholarly attention. We have therefore reproduced those Future Frontiers here.

Regulating Government-Developed Frontier AI

Today, governments act primarily as consumers of frontier AI technologies. Frontier AI systems are primarily developed by private companies with little or no initial government involvement; those companies may then tailor their general frontier AI offerings to meet the particular needs of governmental customers.[ref 1] In other words, the private sector is generally responsible for the primary development of frontier AI models and systems, with governmental steering entering, if at all, later in the commercialization lifecycle.

However, as governments increasingly realize the significant strategic implications of frontier AI technologies, they may wish to become more directly involved in the development of frontier AI systems at earlier stages of the development cycle.[ref 2] This could range from frontier AI systems initially developed under government contract, to a fully governmental effort to develop next-generation frontier AI systems.[ref 3] Indeed, a 2024 report from the U.S.-China Economic and Security Review Commission called for Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability.”[ref 4] 

Existing proposals for the regulation of the development and deployment of frontier AI systems envision the imposition of such regulations on private businesses, under the implicit assumption that frontier AI development and deployment will remain private-led. If and when governments do take a larger role in the development of frontier AI systems, new regulatory paradigms will be needed. Such proposals need to identify and address unique challenges and opportunities that government-led AI development will pose, as compared to today’s private-led efforts.

Examples of possible questions in this workstream could include:

  1. How are safety and security risks in high-stakes governmental research projects (e.g., the Manhattan Project) usually regulated?
  2. How might the government steer development of frontier AI technologies if it wished to do so?
  3. What existing checks and balances would apply to a government program to develop frontier AI technologies?
  4. How would ideal regulation of government-directed frontier AI development vary depending on the mechanism used for such direction (e.g., contract versus government-run development)?
  5. How might ideal regulation of government-directed frontier AI development vary depending on whether the development is led by military or civilian parts of the government?
  6. If necessary to procure key inputs for the program (e.g., compute), how could the U.S. government collaborate with select allies on such programs?[ref 5]

Accelerating Technologies that Defend against Risks from AI

It is likely infeasible[ref 6] and/or undesirable[ref 7] to fully prevent the wide proliferation of many high-risk AI systems. There is therefore increasing interest in developing technologies[ref 8] to defend against possible harms from diffuse AI systems, and remedy those harms where defensive measures fail.[ref 9] Collectively, we call these “defensive technologies.”[ref 10] 

Many of the most valuable contributions to the development and deployment of defensive technologies will not come from legal scholars, but rather from some combination of entrepreneurship, technological research and development, and funding. But legal change may also play a role in more directly accelerating the development and deployment of defensive technologies, such as by removing barriers to their development and adoption that raise the costs of research or reduce its rewards.[ref 11]

Examples of general questions that might be valuable to explore include:

  1. What are examples of existing policies that unnecessarily hinder research and development into defense-enhancing technologies, such as by (a) raising the costs of conducting that research, or (b) reducing the expected profits of deployment of defense-enhancing technologies?[ref 12] 
  2. What are existing legal or policy barriers that inhibit effective diffusion of defensive technologies across society?[ref 13]
  3. How can the law preferentially[ref 14] accelerate defensive technologies?  

Regulating Internal Deployment

Many existing AI policy proposals regulate AI systems at the point when they are first “deployed”: that is, made available for use by persons external to the developer. But pre-deployment use of AI models by the developing company—“internal deployment”—may also pose substantial risks.[ref 15] Most policy proposals aimed at reducing large-scale risks from AI primarily regulate AI at or after the point of external deployment, so policy proposals for regulating internal deployment would be valuable.

Example questions in the workstream might include:

  1. What existing modes of regulation in other AI industries are most analogous to regulation of internal deployment?[ref 16]
  2. How can the state identify which AI developers are appropriate targets for regulation of internal deployment?
  3. How can regulation of internal deployment simultaneously reduce risk and allow for appropriate exploration of model capabilities and risks?
  4. What are the constitutional (e.g., Fourth Amendment) limitations on regulation of internal deployment?
  5. How can regulation of internal deployment be designed to reduce risks of espionage and information leakage?

AI technologies performing legal tasks will likely surface loopholes or gaps in the law: that is, they will reveal actions that are permitted by the law but that policymakers would likely prefer to be prohibited. There are several reasons to expect this:

  1. AI itself constitutes a significant technological change, and technological changes often surface loopholes or gaps in the law.[ref 17]
  2. AI might accelerate technological change and economic growth,[ref 18] which will similarly often surface gaps or loopholes in the law.
  3. AI might be more efficient at finding gaps or loopholes in the law, and quickly exploiting them.

Given that lawmaking is a slow and deliberative process, actors can often exploit gaps or loopholes before policymakers can “patch” them. While this dynamic is not new, AI systems may be able to cause more harm or instability by finding or exploiting gaps and loopholes than humans have in the past, due to their greater speed of action, ability to coordinate, dangerous capabilities, and (possibly) lack of internal moral constraints.

This suggests that it may be very valuable for policymakers to “patch” legal gaps and loopholes by quickly enacting new laws. However, constitutional governance is often intentionally slow, deliberative, and decentralized, suggesting that it is unwise and sometimes illegal to accelerate lawmaking in certain ways.

This tension suggests that it would be valuable to research how new legislative and administrative procedures could quickly “patch” legal gaps and loopholes through new law while also complying with the letter and spirit of constitutional limitations on lawmaking.

Responsibly Advancing AI-Enabled Governance

Recent years have seen robust governmental interest in the use of AI technologies for administration and governance.[ref 19] As these systems advance in capability, this interest may create significant risks of misuse,[ref 20] as well as potential safety risks from the deployment of advanced systems in high-stakes governmental infrastructure.

A recent report[ref 21] identifies the dual imperative for governments to:

  1. Quickly adopt AI technology to enhance state capacity, but
  2. Take care when doing so.

The report lays out three types of interventions worth considering:

  1. “‘Win-win’ opportunities” that help with both adoption and safety;[ref 22]
  2. “Risk-reducing interventions”; and
  3. “Adoption-accelerating interventions.”

Designing concrete policies in each of these categories would be very valuable, especially policies in the first category, or policies in the second or third category that do not come at the expense of the other.

As AI systems are able to complete more of the tasks typically associated with traditional legal functions—drafting legislation and regulation, adjudicating,[ref 23] litigating, drafting contracts, counseling clients, negotiating, investigating possible violations of law, generating legal research—it will be natural to consider whether and how these tasks should be automated. 

We can call AI systems performing such functions “AI lawyers.” If implemented well, AI lawyers could help with many of the challenges that AI could bring. AI lawyers could write new laws to regulate governmental development or use of frontier AI, monitor governmental uses of AI, and craft remedies for violations. AI lawyers could also identify gaps and loopholes in the law, accelerate negotiations between lawmakers, and draft legislative “patches” that reflect lawmakers’ consensus. 

However, entrusting ever more power to AI lawyers entails significant risks. If AI lawyers are not themselves law-following, they may abuse their governmental station to the detriment of citizens. If such systems are not intent-aligned,[ref 24] entrusting AI systems with significant governmental power may make it easier for those systems to erode humanity’s control over human affairs. Regardless of whether AI lawyers are aligned, delegating too many legal functions to AI lawyers may frustrate important rule-of-law values, such as democratic responsiveness, intelligibility, and predictability. Furthermore, there are likely certain legal functions that it is important for natural persons to perform, such as serving as a judge on the court of last resort.

Research into the following questions may help humanity navigate the promises and perils of AI lawyers:

  1. Which legal functions should never be automated?
  2. Which legal functions, if entrusted to an AI lawyer, would significantly threaten democratic and rule-of-law values?
  3. How can AI lawyers enhance human autonomy and rule-of-law values?
  4. How can AI lawyers enhance the ability of human governments to respond to challenges from AI?
  5. What substantive safety standards should AI lawyers have to satisfy before being deployed in the human legal system?
  6. Which new legal checks and balances should be introduced if AI lawyers accelerate the speed of legal processes? 

Related to the above, there is also a question of how we can accelerate potential technologies that would defend against general risks to the rule of law and/or democratic accountability. For instance, as lawyers, we may also be particularly well-placed to advance legal reforms that make it easier for citizens to leverage “AI lawyers” to help them defend against vexatious litigation and governmental oppression, or pursue meritorious claims.[ref 25] For example, existing laws regulating the practice of law may impose barriers on citizens’ ability to leverage AI for their legal needs.[ref 26] This suggests further questions, such as:

  1. Who will benefit by default from the widespread availability of cheap AI lawyers?
  2. Will laws regulating the practice of law form a significant barrier to defensive (and other beneficial) applications of AI lawyers?
  3. How should laws regulating the practice of law accommodate the possibility of AI lawyers, especially those that are “defensive” in some sense?
  4. How might access to cheap AI lawyers affect the volume of litigation and pursuit of claims? If there is a significant increase, would this result in a counterproductive effect by slowing down court processing times or prompting the judicial system to embrace technological shortcuts? 

Approval Regulation in a Decentralized World

After the release of GPT-4, a number of authors and policymakers proposed compute-indexed approval regulation, under which frontier AI systems trained with large amounts of compute would be subjected to heightened pre-deployment scrutiny.[ref 27] Such regulation was perceived as attractive in large part because, under the scaling paradigm that produced GPT-4, development of frontier AI systems depended on the use of a small number of large data centers, which could (in theory) be easily monitored.

However, subsequent technological developments that reduce the amount of centralized compute needed to achieve frontier AI capabilities (namely improvements in decentralized training[ref 28] and the rise of reasoning models)[ref 29] have cast serious doubts on the long-term viability of compute-indexed approval regulation as a method for preventing unapproved development of highly capable AI models.[ref 30] 

It is not clear, however, that these developments mean that other forms of approval regulation for frontier AI development and deployment would be totally ineffective. Many activities are subject to reasonably effective approval regulation notwithstanding their highly distributed nature. For example, people generally respect laws requiring a license to drive a car, hunt, or practice law, even though these activities are very difficult for the government to reliably prevent ex ante. Further research into approval regulation for more decentralized activities could therefore help illuminate whether approval regulation for frontier AI development could remain viable, at an acceptable cost to other values (e.g., privacy, liberty), notwithstanding these developments in the computational landscape.

Examples of possible questions in this workstream could include:

  1. How effective are existing approval regulation regimes for decentralized activities?
  2. Which decentralized activities most resemble frontier AI development under the current computing paradigm?
  3. How do governments create effective approval regulation regimes for decentralized activities, and how might those mechanisms be applied to decentralized frontier AI development?
  4. How can approval regulation of decentralized frontier AI development be implemented at acceptable costs to other values (e.g., privacy, liberty, administrative efficiency)?

Two Byrd Rule problems with the AI moratorium

Note: this commentary was drafted on June 26, 2025, as a memo not intended for publication; we’ve elected to publish it in case the analysis laid out here is useful to policymakers or commentators following ongoing legislative developments regarding the proposed federal moratorium on state AI regulation. The issues noted here are relevant to the latest version of the bill as of 2:50 p.m. ET on June 30, 2025.

Two Byrd Rule issues have emerged, both of which should be fixed. It appears that the Parliamentarian has not ruled on either.

Effects on existing BEAD funding

The Parliamentarian may have already identified the first Byrd Rule issue: the plain text of the AI Moratorium would affect all $42.45 billion in BEAD funding, not just the newly allocated $500 million. It is not 100% certain that a court would read the statute this way, but it is the most likely outcome. We analyzed this problem in a recently published commentary. This issue could be fixed via an amendment.

Private enforcement of the moratorium

In that same article, we flagged a second issue that also presents a Byrd Rule problem: the AI Moratorium seemingly creates enforcement rights in private parties. That’s a problem under the Byrd Rule because the AI Moratorium must be a “necessary term or condition” of an outlay, and a private enforcement right cannot be characterized as a necessary term or condition of an outlay that does not concern those third parties. This can be fixed by clarifying that the only enforcement mechanism is withdrawal or denial of the new BEAD funding.

The text at issue – private enforcement of the moratorium

The plain text of the moratorium, and applicable legal precedents, likely empower private parties to enforce the moratorium in court. Stripping the provision down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That sounds like a prohibition. It doesn’t mention the Department of Commerce. Nor does it leave it to the Secretary’s discretion whether that prohibition applies. If states satisfy the criteria, they likely are prohibited from enforcing AI laws.

Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws—e.g., the Airline Deregulation Act, 49 U.S.C. § 41713—have used similar language to expressly preempt state laws, courts have interpreted this as authorizing private parties to sue for an injunction preventing enforcement of the preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).

What would happen – private lawsuits to enforce the moratorium

Private parties could vindicate this right in one of two ways. First, if a private party (e.g., an AI company) fears that a state will imminently sue it for violating that state’s AI law, the private party could seek a declaratory judgment in federal court. Second, if the state actually sues the private party, that party could raise the moratorium as a defense to that lawsuit. If the private party is based in the same state, that defense would be heard in state court and could result in dismissal of the state’s claims; if the party is from out of state, the case could be removed to federal court, where a judge could likewise throw out the state’s claims.

Why it’s a Byrd Rule problem – private rights are not “terms or conditions”

The AI Moratorium must be a “necessary term or condition” of an outlay. In this case, promising not to enforce AI laws is a valid “term or condition” of the grant. Passively opening oneself up to lawsuits and defenses by private parties is not. Those lawsuits occur long after states take the money, are outside their control, and involve the actions of individuals who are not parties to the grant agreement. They also have significant effects unrelated to spending: binding the actions of states and invalidating laws in ways completely separate from the underlying transaction between the Department of Commerce and the states. It is perfectly compatible with the definition of “terms and conditions” for the Department of Commerce to deobligate funds if the terms of its grant are violated. It is an entirely different thing to create a defense or cause of action for third parties and to allow those parties to interfere with the enforcement power of states. The creation of rights for a third party uninvolved in the delivery or receipt of an outlay cannot be considered a necessary term or condition.

The AI moratorium—the Blackburn amendment and new requirements for “generally applicable” laws

Published: 9:55 pm ET on June 29, 2025

Last updated: 10:28 pm ET on June 29, 2025

The latest version of the AI moratorium has been released, with some changes to the “rule of construction.” We’ve published two prior commentaries on the moratorium (both of which are still relevant, because the updated text has not addressed the issues noted in either). The new text:

  1. Shortens the “temporary pause” from 10 to 5 years;
  2. Attempts to exempt laws addressing CSAM, children’s online safety, and rights to name/likeness/voice/image—although the amendment seemingly fails to protect the laws its drafters intend to exempt; and
  3. Creates a new requirement that laws do not create an “undue or disproportionate burden,” which is likely to generate significant litigation.

The amendment tries to protect state laws on child sexual abuse materials and recording artists, but likely fails to do so. 

The latest text appears to be drafted specifically to address the concerns of Senator Marsha Blackburn, who does not want the moratorium to apply to state laws affecting recording artists (like Tennessee’s ELVIS Act) and laws affecting child sexual abuse material (CSAM). But while the amended text lists each of these categories of laws as specific examples of “generally applicable” laws or regulations, the new text only exempts those laws if they do not impose an “undue or disproportionate burden” on AI models, systems, or “algorithmic decision systems,” as defined in the moratorium, in order to “reasonably effectuate the broader underlying purposes of the law or regulation.”

However, laws like the ELVIS Act likely do impose a disproportionate burden on AI systems. They almost exclusively target AI systems and outputs, and the effect of the law will almost entirely be borne by AI companies. While trailing qualifiers always vex courts, the fact that “undue or disproportionate burden” is separated from the preceding list by a comma strongly suggests that it qualifies the entire list and not just “common law.” Common sense also counsels in favor of this reading: it’s unlikely that an inherently general body of law (like the common law) would place a disproportionate burden on AI, while legislation like the ELVIS Act absolutely could (and likely does). As we read the new text, the most likely outcome is that the laws Senator Blackburn wants to protect would not be protected.

Even if other readings are possible, this “disproportionate” language would almost certainly create litigation if enacted, with companies challenging whether the ELVIS Act and CSAM laws are actually exempted. As we have previously noted, the moratorium will likely be privately enforceable—meaning that any company or individual against whom a state attempts to enforce a state law or regulation will be able to sue to prevent enforcement.

The newly added “undue or disproportionate burden” language creates an unclear standard (and will likely generate extensive litigation)

The problem discussed above extends beyond the specific laws that Senator Blackburn wishes to protect. Previously, “generally applicable” laws were exempted. Under the new language, laws that address AI models/systems or “automated decision systems” can be exempted, but only if they do not place an “undue or disproportionate burden” on said models/systems. The effect of the new “undue or disproportionate burden” language will likely be to generate additional litigation and uncertainty. It may also make it more likely that some generally applicable laws, such as facial recognition laws or data protection laws, will no longer be exempt because they may place a disproportionate burden on AI models/systems.

Other less significant changes

Previously, subsection (q)(2)(A)(ii) excepted any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of [AI models/systems/automated decision systems].” As amended, the relevant provision now excepts any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures related to the adoption or deployment of [AI models/systems/automated decision systems].” This amended language is slightly broader than the original, but the difference does not seem highly significant. 

Additionally, the structure of the paragraphs has been adjusted slightly, likely to make clear that subparagraph (B) (which requires that any fee or bond imposed by any excepted law be reasonable and cost-based and treat AI models/systems in the same manner as other models/systems) modifies both the “generally applicable law” and “primary purpose and effect” prongs of the rule of construction rather than just one or the other.

Other issues remain

As we’ve discussed previously, our best read of the text suggests that two additional issues remain unaddressed: any state that accepts a portion of the new BEAD appropriation and later violates the moratorium could still have its entire BEAD allocation clawed back, and the moratorium will likely remain privately enforceable by companies and individuals.

The AI moratorium—more deobligation issues

Earlier this week, LawAI published a brief commentary discussing how to interpret the provisions of the proposed federal moratorium on state laws regulating AI that relate to deobligation of Broadband Equity, Access, and Deployment (BEAD) funds. Since that publication, the text of the proposed moratorium has been updated, apparently to comply with a request from the Senate parliamentarian. Given the importance of this issue, and some confusion about what exactly the changes to the moratorium’s text do, we’ve decided to publish a sequel to that earlier commentary briefly explaining how the new version of the bill would affect existing BEAD funding.

Does the latest version of the moratorium affect existing BEAD funding or only the new $500 million?

The moratorium would still, potentially, affect both existing and newly appropriated BEAD funding. 

Essentially, there are two tranches of money at issue here: $500 million in new BEAD funding that the reconciliation bill would appropriate, and the $42.45 billion in existing BEAD funding that has already been obligated to states (none of which has actually been spent as of the writing of this commentary). The previous version of the moratorium, as we noted in our earlier commentary, contained a deobligation provision that would have allowed deobligation (i.e., clawing back) of a state’s entire portion of the $42.45 billion tranche as well as the same state’s portion of the new $500 million tranche.

The new version of the moratorium would update that deobligation provision by adding the words “if obligated any funds made available under subsection (b)(5)(A)” to the beginning of 47 U.S.C. § 1702(g)(3)(B)(iii). The provision now reads, in relevant part, “The Assistant Secretary… may, in addition to other authority under applicable law, deobligate grant funds awarded to an eligible entity that… if obligated any funds made available under subsection (b)(5)(A), is not in compliance with [the AI moratorium].” 

In other words, the update clarifies that only states that accept a portion of the new $500 million in BEAD funding can have their BEAD funding clawed back if they attempt to enforce state laws regulating AI. But it does not change the fact that any state that does accept a portion of the $500 million, and then violates the moratorium (intentionally or otherwise), is subject to having all of its BEAD funding clawed back—including its portion of the $42.45 billion tranche of existing BEAD funding. Paragraph (3) covers “deobligation of awards” generally, and the phrase “grant funds awarded to an eligible entity” clearly means all grant funds awarded to that entity, rather than just funds made available under subsection (b)(5)(A) (i.e., the new $500 million). This reading is confirmed by subsections (g)(3)(B)(i) and (g)(3)(B)(ii), which allow deobligation if a state, for example, “demonstrates an insufficient level of performance, or wasteful or fraudulent spending,” and which clearly allow for deobligation of all of a state’s BEAD funding rather than just the new $500 million tranche.

So what has changed?

The most significant consequence of the update to the deobligation provision is that any state that does not accept any of the new $500 million appropriation is now clearly not subject to having existing BEAD funds clawed back for noncompliance with the moratorium. As we noted in our previous commentary, under the previous text, if Commerce had deobligated existing BEAD funds for, e.g., “wasteful or fraudulent spending,” any re-obligation of those funds would have required compliance with the moratorium. That would not be possible under the new text.

In other words, states would clearly be able to opt out of compliance with the moratorium by choosing not to accept their share of the newly appropriated BEAD money. As other authors have noted, this would mean that wealthy states with a strong appetite for AI regulation, like New York and California, could pass on the new funding and continue to enact and enforce AI laws while less wealthy and more rural states might accept the additional BEAD funding in exchange for ceasing to regulate. And if technological progress and the emergence of new risks from AI caused any states that originally accepted their share of the $500 million to later change course and begin to regulate, they could potentially have all of their previously obligated BEAD funding clawed back.

The AI Moratorium—deobligation issues, BEAD funding, and independent enforcement

There’s been a great deal of discussion in recent weeks about the controversial proposed federal moratorium on state laws regulating AI. The most recent development is that the moratorium has been amended to form a part of the Broadband Equity, Access, and Deployment (BEAD) program. The latest draft of the moratorium, which recently received the go-ahead from the Senate Parliamentarian, appropriates an additional $500 million in BEAD funding, to be obligated to states that comply with the moratorium’s requirement not to enforce laws regulating AI models, systems, or “automated decision systems.” This commentary discusses two pressing legal questions that have been raised about the new moratorium language—whether it affects the previously obligated $42.45 billion in BEAD funding in addition to the $500 million in new funding, and whether private parties will be able to sue to enforce the moratorium.

Does the Moratorium affect existing BEAD funding, or only the new $500M?

One issue that has caused some confusion among commentators and policymakers is precisely how compliance or noncompliance with the moratorium would affect states’ ability to keep and spend the $42.45 billion in BEAD funding that has previously been obligated.

It is true that subsection (p) specifies that only amounts made available “On and after the date of enactment of this subsection” (in other words, the new $500 million appropriation and any future appropriations) depend on compliance with the moratorium. However, the moratorium would also add a new provision to subsection (g), which covers “deobligation of awards.” This new provision states that Commerce may deobligate (i.e., withdraw) “grant funds awarded to an eligible entity that… is not in compliance with subsection (q) or (r).” This deobligation provision clearly and unambiguously applies to all $42.45 billion in previously obligated BEAD funding, in addition to the new $500 million. The new provision amends the existing BEAD deobligation rules in subsection (g), rather than being confined to the moratorium’s new subsections. And while subsections (p) and (q) affect only states that accept new obligations “on or after the enactment” of the bill, subsection (g) applies to all “grant funds,” with no limitation on the source or timing of those funds.

So, any state that is not in compliance with subsection (q)—which includes any state that accepts any portion of the newly appropriated $500 million and is later determined to have violated the moratorium, even unintentionally—could face having all of its previously obligated BEAD funding clawed back by Commerce, rather than just its portion of the new $500 million appropriation.

Additionally, it is possible that even states that choose not to accept any of the new $500 million could be affected, if Commerce deobligates previously obligated funds for reasons such as “an insufficient level of performance, or wasteful or fraudulent spending.” If this occurred, then any re-obligation of the clawed-back funds would require compliance with the moratorium. In other words, Commerce could attempt to use a state’s entire portion of the $42.45 billion in BEAD funding as a cudgel to coerce that state into complying with the moratorium and agreeing not to regulate AI models, systems, or “automated decision systems.”

Can private parties enforce the moratorium?

Probably. Various commentators have argued that the moratorium cannot be enforced by private parties, or that the Secretary of Commerce will, in his discretion, determine how vigorously the moratorium will be enforced. But the plain text of the provision, and applicable legal precedents, indicate that private parties will likely be entitled to enforce the prohibition on state AI regulation as well.

Stripping the provision down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That is a clear prohibition. It doesn’t mention the Department of Commerce. Nor does it leave it to the Secretary’s discretion whether that prohibition applies. If states satisfy the criteria, they are prohibited from enforcing laws restricting AI.

Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws—e.g., the Airline Deregulation Act, 49 U.S.C. § 41713—have used similar language to expressly preempt state laws, courts have interpreted that language as authorizing private parties to sue for an injunction preventing enforcement of the preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).

The case for AI liability

The debate over AI governance has intensified following recent federal proposals for a ten-year moratorium on state AI regulations. This preemptive approach threatens to replace emerging accountability mechanisms with a regulatory vacuum.

In his recent AI Frontiers article, Kevin Frazier argues in favor of a federal moratorium, seeing it as necessary to prevent fragmented state-level liability rules that would stifle innovation and disadvantage smaller developers. Frazier (an AI Innovation and Law Fellow at the University of Texas, Austin, School of Law) also contends that, because the norms of AI are still nascent, it would be premature to rely on existing tort law for AI liability. Frazier cautions that judges and state governments lack the technical expertise and capacity to enforce liability consistently.

But while Frazier raises important concerns about allowing state laws to assign AI liability, he understates both the limits of federal regulation and the unique advantages of liability. Liability represents the most suitable policy tool for addressing many of the most pressing risks posed by AI systems. Its superiority stems from three basic advantages: liability can adapt dynamically amid deep disagreement and uncertainty about AI risks, it can incentivize optimal investments in safety, and it can account for harms to third parties.

Frazier correctly observes that “societal norms around AI are still forming, and the technology itself is not yet fully understood.” However, I believe he draws the wrong conclusion from this observation. The profound disagreement among experts, policymakers, and the public about AI risks and their severity does not argue against using liability frameworks to curb potential abuses. On the contrary, it renders their use indispensable.

Disagreement and Uncertainty

The disagreement about AI risks reflects more than differences in technical assessment. It also encompasses fundamental questions about the pace of AI development, the likelihood of catastrophic outcomes, and the appropriate balance between innovation and precaution. Some researchers argue that advanced AI systems pose high-probability and imminent existential threats, warranting immediate regulatory intervention. Others contend that such concerns are overblown, arguing that premature regulation could stifle beneficial innovation.

Such disagreement creates paralysis in traditional regulatory approaches. Prescriptive regulation designed to address risks before they become reality — known in legal contexts as “ex ante,” meaning “before the fact” — generally entails substantial up-front costs that increase as rules become stricter. Passing such rules requires social consensus about the underlying risks and the costs we’re willing to bear to mitigate them.

When expert opinions vary dramatically about foundational questions, as they do in the case of AI, regulations may emerge that are either ineffectively permissive or counterproductively restrictive. The political process, which tends to amplify rather than resolve such disagreements, provides little guidance for threading this needle effectively.

Approval-based systems face similar challenges. In an approval-based system (for example, Food and Drug Administration regulations of prescription drugs), regulators must formally approve new products and technologies before they can be used. Thus, they depend on regulators’ ability to distinguish between acceptable and unacceptable risks — a difficult task when the underlying assessments remain contested.

Liability systems, by contrast, operate effectively even amid substantial disagreements. They do not require ex ante consensus about appropriate risk levels; rather, they assign “ex post” accountability. Liability scales automatically with risk, as revealed in cases where individual plaintiffs suffer real injuries. This obviates the need for ex ante resolution of wide social disagreement about the magnitude of AI risks.

Thus, while Frazier and I agree that governments have limited expertise in AI risk management, this actually strengthens rather than undermines the case for liability, which harnesses private-sector expertise through market incentives rather than displacing it through prescriptive rules.

Reasonable Care and Strict Liability

Frazier and I also share some common ground regarding the limits of negligence-based liability. Traditional negligence doctrine imposes a duty to exercise “reasonable care,” typically defined as the level of care that a reasonable person would exercise under similar circumstances. While this standard has served tort law well across many domains, AI systems present unique challenges that may render conventional reasonable care analysis inadequate for managing the most significant risks.

In practice, courts tend to engage in a fairly narrow inquiry when assessing whether a defendant exercised reasonable care. If an SUV driver runs over a pedestrian, courts generally do not inquire as to whether the net social benefits of this particular car trip justified the injury risk it generated for other road users. Nor would a court ask whether the extra benefits of driving an SUV (rather than a lighter-weight sedan) justified the extra risks the heavier vehicle posed to third parties. Those questions are treated as outside the scope of the reasonable care inquiry. Instead, courts focus on questions like whether the driver was drunk, or texting, or speeding.

In the AI context, I expect a similarly narrow negligence analysis that asks whether AI companies implemented well-established alignment techniques and safety practices. I do not anticipate questions about whether it was reasonable to develop an AI system with certain high-level features, given the current state of AI alignment and safety knowledge.

However, while negligence is limited in its ability to address this broader, upstream culpability, other forms of liability can still reach it. Under strict liability, defendants internalize the full social costs of their activities. This structure incentivizes investment in precaution up to the point where marginal costs equal marginal benefits. Such an alignment between private and social incentives proves especially valuable when reasonable care standards may systematically underestimate the optimal level of precaution.
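To make the incentive claim concrete, here is a stylized law-and-economics sketch (my own illustration; the notation is not drawn from Frazier’s article or from any bill). Let x be a developer’s spending on precaution, p(x) the resulting probability of harm (decreasing in x), and H the magnitude of the harm. A strictly liable developer that must fully compensate victims chooses x to minimize its total expected costs:

\[
\min_{x \ge 0}\; x + p(x)\,H
\qquad\Longrightarrow\qquad
-\,p'(x^{*})\,H = 1 .
\]

That is, the developer keeps spending on precaution until the last dollar spent reduces expected harm by exactly one dollar, which is also the socially optimal stopping point. Under a narrow negligence standard, by contrast, a developer that satisfies “reasonable care” sheds the p(x)H term entirely, so upstream design choices that raise p(x) go unpriced.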

Accounting for Third-Party Harms

Another key feature of liability systems is their capacity to address third-party harms: situations where AI systems cause damage to parties who have no contractual or other market relationship with the system’s operator. These scenarios present classic market failure problems where private incentives diverge sharply from social welfare — warranting some sort of policy intervention.

When AI systems harm their direct users, market mechanisms provide some corrective pressure. Users who experience harms from AI systems can take their business to competitors, demand compensation, or avoid such systems altogether. While these market responses may be imperfect — particularly when harms are difficult to detect or when users face switching costs — they do provide an organic feedback mechanism, incentivizing AI system operators to invest in safety.

Third-party harms present an entirely different dynamic. In such cases, the parties bearing the costs of system failures have no market leverage to demand safer design or operation. AI developers, deployers, and users internalize the benefits of their activities — revenue from users, cost savings from automation, competitive advantages from AI capabilities — while externalizing many of the costs onto third parties. Without policy intervention, this leads to systematic underinvestment in safety measures that protect third parties.

Liability systems directly address this externality problem by compelling AI system operators to internalize the costs they impose on third parties. When AI systems harm people, liability rules require AI companies to compensate victims. This induces AI companies to invest in safety measures that protect third parties. AI companies themselves are best positioned to identify such measures; the range of potential mitigations includes changes to high-level system architecture, increased investment in alignment and interpretability research, and testing and red-teaming of new models before deployment (potentially including broad internal deployment).

The power of this mechanism is clear when compared with alternative approaches to the problem of mitigating third-party harms. Prescriptive regulation might require regulators to identify appropriate risk-mitigation measures ex ante, a challenging task given the rapid evolution of AI technology. Approval-based systems might prevent the deployment of particularly risky systems, but they provide limited ongoing incentives for safety investment once systems are approved. Only liability systems create continuous incentives for operators to identify and implement cost-effective safety measures throughout the lifecycle of their systems.

Moreover, liability systems create incentives for companies to develop safety expertise that extends beyond compliance with specific regulatory requirements. Under prescriptive regulation, companies have incentives to meet specified requirements but little reason to exceed them. Under liability systems, companies have incentives to identify and address risks even when those risks are not explicitly anticipated by regulators. This creates a more robust and adaptive approach to safety management.

State-Level Liability

Frazier’s concerns about a patchwork of state-level AI regulation deserve serious examination, but his analysis overstates both the likelihood and the problematic consequences of such inconsistency. His critique conflates different types of regulatory requirements, while ignoring the inherent harmonizing features of liability systems.

First, liability rules exhibit greater natural consistency across jurisdictions than other forms of regulation do. Frazier worries about “ambiguous liability requirements” and companies needing to “navigate dozens of state-level laws.” However, the common-law tradition underlying tort law creates pressures toward harmonization that prescriptive regulations lack. Basic negligence principles — duty, breach, causation, and damages — remain remarkably consistent across states, despite the absence of a federal mandate.

More importantly, strict liability regimes avoid patchwork problems entirely. Under strict liability, companies bear responsibility for harm they cause, regardless of their precautionary efforts or the specific requirements they meet. This approach creates no compliance component that could vary across states. A company developing AI systems under a strict liability regime faces the same fundamental incentive everywhere: Make your systems safe enough to justify the liability exposure they create.

Frazier’s critique of Rhode Island Senate Bill 358, which I helped design, mischaracterizes some of its provisions. The bill is designed to close a gap in current law in which an AI system may engage in wrongful conduct, yet no one may be liable.

Consider an agentic AI system that a user instructs to start a profitable internet business. The AI system determines that the easiest way to do this is to send out phishing emails and steal innocent people’s identities. It also covers its tracks, so reasonable care on the part of the user would neither prevent nor detect this activity. In such a case, current Rhode Island law would require the innocent third-party plaintiffs to prove that the developers failed to adopt some specific precautionary measure that would have prevented the injury, which may not be possible.

Under SB 358, it would be sufficient for the plaintiff to prove that the AI system’s conduct would be a tort if a human engaged in it, and that neither the user nor an intermediary that fine-tuned or scaffolded the model had intended or could have reasonably foreseen the system’s tortious conduct. That is, the bill holds that when AI systems wrongfully harm innocent people, someone should be liable. If the user and any intermediaries that modified the system are innocent, the buck should stop with the model developer.

One concern with this approach is that the elements of some torts turn on the defendant’s mental state, and many people doubt that AI systems can be understood as having mental states at all. For this reason, SB 358 creates a rebuttable presumption: if the judge or jury would infer that a human who engaged in conduct similar to the AI system’s possessed the relevant mental state, then the same inference applies to the AI system.

AI Federalism

While state-level AI liability represents a significant improvement over the current regulatory vacuum, I do think there is an argument for federalizing AI liability rules. Alternatively, more states could adopt narrow, strict liability legislation (like Rhode Island SB 358) that would help close the current AI accountability gap.

A federal approach could provide greater consistency and reflect the national scope of AI system deployment. Federal legislation could also more easily coordinate liability rules with other aspects of AI governance, such as liability insurance requirements, safety testing requirements, disclosure obligations, and government procurement standards.

However, the case for federalization is not an argument against liability as a policy tool. Whether implemented at the state level or the federal level, liability systems offer unique advantages for managing AI risks that other regulatory approaches cannot match. The key insight is not that liability must be federal to be effective, but rather that liability — at whatever level — represents a superior approach to AI governance than either prescriptive regulation or approval-based systems.

Frazier’s analysis culminates in support for federal preemption of state-level AI liability, noting that the US House reconciliation bill includes “a 10-year moratorium on a wide range of state AI regulations.” But this moratorium would replace emerging state-level accountability mechanisms with no accountability at all.

The proposed 10-year moratorium would leave two paths for responding to AI risks. One path would be for Congress to pass federal legislation. Confidence in such a development would be misplaced given Congress’s track record on technology regulation. 

The second path would be to accept a regulatory vacuum where AI risks remain entirely unaddressed through legal accountability mechanisms. Some commentators (I’m not sure if Frazier is among them) actively prefer this laissez-faire scenario to a liability-based governance framework, claiming that it best promotes innovation to unlock the benefits of AI. This view is deeply mistaken. Concerns that liability will chill innovation are overstated. If AI holds the promise that Frazier and I think it does, there will still be very strong incentives to invest in it, even after developers fully internalize the technology’s risks.

What we want to promote is socially beneficial innovation that does more good than harm. Making AI developers pay when their systems cause harm balances their incentives and advances this larger goal. (Similarly, requiring companies to pay for the harms of pollution makes sense, even when that pollution is a byproduct of producing useful goods or services like electricity, steel, or transportation.)

In a world of deep disagreement about AI’s risks and benefits, abandoning emerging liability mechanisms risks creating a dangerous regulatory vacuum. Liability’s unique abilities — adapting dynamically, incentivizing optimal safety investments, and addressing third-party harms — make it indispensable. Whether at the state level or the federal level, liability frameworks should form the backbone of any effective AI governance strategy.