Law-Following AI: Designing AI Agents to Obey Human Laws
This piece was originally published in the Fordham Law Review.
Abstract
Artificial intelligence (AI) companies are working to develop a new type of actor: “AI agents,” which we define as AI systems that can perform computer-based tasks as competently as human experts. Expert-level AI agents will likely create enormous economic value but also pose significant risks. Humans use computers to commit crimes, torts, and other violations of the law. As AI agents progress, therefore, they will be increasingly capable of performing actions that would be illegal if performed by humans. Such lawless AI agents could pose a severe risk to human life, liberty, and the rule of law.
Designing public policy for AI agents is one of society’s most important tasks. With this goal in mind, we argue for a simple claim: in high-stakes deployment settings, such as government, AI agents should be designed to rigorously comply with a broad set of legal requirements, such as core parts of constitutional and criminal law. In other words, AI agents should be loyal to their principals, but only within the bounds of the law: they should be designed to refuse to take illegal actions in the service of their principals. We call such AI agents “Law-Following AIs” (LFAI).
The idea of encoding legal constraints into computer systems has a respectable provenance in legal scholarship. But much of the existing scholarship relies on outdated assumptions about the (in)ability of AI systems to reason about and comply with open-textured, natural-language laws. Thus, legal scholars have tended to imagine a process of “hard-coding” a small number of specific legal constraints into AI systems by translating legal texts into formal machine-readable computer code. Existing frontier AI systems, however, are already competent at reading, understanding, and reasoning about natural-language texts, including laws. This development opens new possibilities for their governance.
Based on these technical developments, we propose aligning AI systems to a broad suite of existing laws as part of their assimilation into the human legal order. This would require directly imposing legal duties on AI agents. While this would be a significant change to legal ontology, it is both consonant with past evolutions (such as the invention of corporate personhood) and consistent with the emerging safety practices of several leading AI companies.
This Article aims to catalyze a field of technical, legal, and policy research to develop the idea of law-following AI more fully. It also aims to flesh out LFAI’s implementation so that our society can ensure that widespread adoption of AI agents does not pose an undue risk to human life, liberty, and the rule of law. Our account and defense of law-following AI is only a first step and leaves many important questions unanswered. But if the advent of AI agents is anywhere near as important as the AI industry supposes, then law-following AI may be one of the most neglected and urgent topics in law today, especially in light of increasing governmental adoption of AI.
[A] code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no debate. But by whom, and with what values? That is the only choice we have left to make.1
***
AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered.2
Introduction
The law, as it exists today, aims to benefit human societies by structuring, coordinating, and constraining human conduct. Even where the law recognizes artificial legal persons—such as sovereign entities and corporations—it regulates them by regulating the human agents through which they act.3 Proceedings in rem, though nominally directed at a thing, really concern the legal relations of persons with respect to the res.4 Animals may act, but their actions cannot violate the law;5 the premodern practice of prosecuting them thus mystifies the modern mind.6 To be sure, the law may protect the interests of animals and other nonhuman entities, but it invariably does so by imposing duties on humans.7 Our modern legal system, at bottom, always aims its commands at human beings.
But technological development has a pesky tendency to challenge long-held assumptions upon which the law is built.8 Frontier AI developers such as OpenAI, Anthropic, Google DeepMind, and xAI are starting to release the first agentic AI systems: AI systems that can do many of the things that humans can do through a computer, such as navigating the internet, interacting with counterparties online, and writing software.9 Today’s agentic AI systems are still brittle and unreliable in various respects.10 These technical limitations constrain their impact, and accordingly today’s AI agents are not our primary object of concern. Rather, our proposal targets the fully capable AI agents that AI companies seek to build: AI systems “that can do anything a human can do in front of a computer”11 as competently as a human expert. Given the generally rapid rate of progress in advanced AI over the past few years,12 the biggest AI companies might achieve this goal much sooner than many outside of the AI industry expect.13
If AI companies succeed at building fully capable AI agents (hereinafter simply “AI agents”)—or come anywhere close to succeeding—the implications will be profound. A dramatic expansion in the supply of competent virtual workers could supercharge economic growth and dramatically improve the speed, efficiency, and reliability of public services.14 But AI agents could also pose a variety of risks, such as precipitating severe economic inequality and dislocation by reducing the demand for human cognitive labor.15 These economic risks deserve serious attention.
Our focus in this Article, however, is on a different set of risks: risks to life, liberty, and the rule of law. Many computer-based actions are crimes, torts, or otherwise illegal. Thus, sufficiently sophisticated AI agents could engage in a wide range of behavior that would be illegal if done by a human, with consequences that are no less injurious.16
These risks might be particularly profound for AI agents cloaked with state power. If they are not designed to be law following,17 government AI agents may be much more willing to follow unlawful orders, or use unlawful methods to accomplish their principals’ policy objectives, than human government employees.18 A government staffed largely by non-law-following AI agents (what we call “AI henchmen”)19 would be a government much more prone to abuse and tyranny.20 As the federal government lays the groundwork for the eventual automation of large swaths of the federal bureaucracy,21 those who care about preserving the American tradition of ordered liberty must develop policy frameworks that anticipate and mitigate the new risks that such changes will bring.
This Article is our contribution to that project. We argue that the law should impose a broad array of legal duties on AI agents—of similar breadth to the legal obligations applicable to humans—to blunt the risks from lawless AI agents. We argue, moreover, that the law should require AI agents to be designed22 to rigorously satisfy those duties.23 We call such agents Law-Following AIs (LFAI).24 We also use “LFAI” to denote our policy proposal: ensuring that AI agents are law following.
To some, the idea that AI should be designed to follow the law may sound absurd. To others, it may sound obvious.25 Indeed, the idea of designing AI systems to obey some set of laws has a long history, going back to Isaac Asimov’s (in)famous26 Three Laws of Robotics.27 But our vision for LFAI differs substantially from much of the existing legal scholarship on the automation of legal compliance. Much of this existing scholarship envisions the design of law-following computer systems as a process of hard-coding a small, fixed, and formally specified set of decision rules into the code of a computer system prior to its deployment in order to address foreseeable classes of legal dilemmas.28 Such discussions often assume that computer systems are unable to interpret, reason about, and comply with open-textured natural-language laws.29
AI progress has undermined that assumption. Today’s frontier AI systems can already reason about existing natural-language texts, including laws, with some reliability—no translation into computer code required.30 They can also use search tools to ground their reasoning in external, web-accessible sources of knowledge,31 such as the evolving corpus of statutes and case law. Thus, the capabilities of existing frontier AI systems strongly suggest that future AI agents will be capable of the core tasks needed to follow natural-language laws, including finding applicable laws, reasoning about them, tracking relevant changes to the law, and even consulting lawyers in hard cases. Indeed, frontier AI companies are already instructing their AI agents to follow the law,32 suggesting they believe that the development of law-following AI agents is already a reasonable goal.
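To illustrate how such search-grounded legal reasoning might work in practice, consider the following sketch. It is purely illustrative—the retrieval tool and model call are assumptions supplied by the caller, not any particular company’s product—but it captures the basic loop of retrieving applicable legal texts and asking a model to reason over them.

```python
# Illustrative sketch only: ground a model's legal analysis in retrieved sources.
# The retrieval function and model call are assumed stand-ins, not real APIs.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LegalSource:
    citation: str  # e.g., a statutory citation
    text: str      # the retrieved statutory or case-law excerpt

def grounded_legal_analysis(
    proposed_action: str,
    retrieve: Callable[[str], List[LegalSource]],  # assumed statute/case-law search tool
    ask_model: Callable[[str], str],               # assumed call to a language model
) -> str:
    """Ask a model whether a proposed action is lawful, grounded in retrieved sources."""
    sources = retrieve(proposed_action)
    context = "\n\n".join(f"{s.citation}:\n{s.text}" for s in sources)
    prompt = (
        "You are assessing the legality of a proposed action.\n"
        f"Proposed action: {proposed_action}\n\n"
        f"Relevant legal sources:\n{context}\n\n"
        "Citing the sources above, explain whether the action appears lawful."
    )
    return ask_model(prompt)
```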
A separate strand of existing literature seeks to prevent harms from highly autonomous AI agents by holding the principals (that is, developers, deployers, or users) of AI agents liable for legal wrongs committed by the agent, through a form of respondeat superior liability.33 This would, in some sense, incentivize those principals to cause their AI agents to follow the law, at least insofar as the agents’ harmful behavior can be thought of as law breaking.34 While we do not disagree with these suggestions, we think that our proposal can serve as a useful complement to them, especially in contexts where liability rules provide only a weak safeguard against serious harm. One important such context is government work, where immunity doctrines often protect government agents and the state from robust ex post accountability for lawless action.35
Combining these themes, we advocate that,36 especially in such high-stakes contexts,37 the law should require that AI agents be designed such that they have “a strong motivation to obey the law” as one of their “basic drives.”38 In other words, we propose not that specific legal commands should be hard-coded into AI agents (and perhaps occasionally updated),39 but that AI agents should be designed to be law following in general.
To be clear, we do not necessarily advocate that AI agents must perfectly obey literally every law. Our claim is more modest in both scope and demandingness. While we are uncertain about which laws LFAIs should follow, adherence to some foundational laws—such as central parts of the criminal law, constitutional law, and basic tort law—seems much more important than adherence to more niche areas of law.40 Moreover, we are open to the possibility that LFAIs should be permitted to run some amount of legal risk: that is, perhaps an LFAI should sometimes be able to take an action that, in its judgment,41 may be illegal.42 Relatedly, we think the case for LFAI is strongest in certain particularly high-stakes domains, such as when AI agents act as substitutes for human government officials or otherwise exercise government power.43 We are unsure when LFAI requirements are justified in other domains.44
The remainder of this Article will motivate and explain the LFAI proposal in further detail. In Part I, we offer background on AI agents. We explain how AI agents could break the law and the risks to human life, liberty, and the rule of law this could entail. We contrast LFAIs with AI henchmen: AI agents that are loyal to their principals but take a purely instrumental approach to the law and are thus willing to break the law for their principal’s benefit when they think they can get away with it. We note that, by default, there may be a market for AI henchmen. We also survey the legal reasoning capabilities of today’s large language models and existing trends toward something like LFAI in the AI industry.
Part II provides the foundational legal framework for LFAI. We propose that the law treat AI agents as legal actors, which we define as entities on which the law imposes duties, even if they possess no rights of their own. Accordingly, we do not argue that AI agents should be legal persons. Our argument is narrower: because AI agents can comprehend laws, reason about them, and attempt to comply with them, the law should require them to do so. We also anticipate and address an objection that imposing duties on AI agents is objectionably anthropomorphic.
If the law imposes duties on AI agents, this leaves open the question of how to make AI agents comply with those duties. Part III answers this question as follows: AI agents should be designed to follow applicable laws, even when they are instructed or incentivized by their human principals to do otherwise. Our case for regulation through the design of AI agents draws on Professor Lawrence Lessig’s insight that digital artifacts can be designed to achieve regulatory objectives.45 Since AI agents are human-designed artifacts, we should be able to design them to refuse to violate certain laws in the first place.
Part IV observes that designing LFAIs is an example of AI alignment: the pursuit of AI systems that rigorously comply with constraints imposed by humans. We therefore connect insights from AI alignment to the concept of LFAI. We also argue that, in a democratic society, LFAI is an especially attractive and tractable form of AI alignment, given the legitimacy of democratically enacted laws.
Part V briefly explores how a legal duty to ensure that AI agents are law following might be implemented. We first note that ex post sanctions, such as tort liability and fines, can disincentivize the development, possession, deployment, and use of AI henchmen in many contexts. However, we also argue that ex ante regulation would be appropriate in some high-stakes contexts, especially government. Concretely, this would mean something like requiring a person who wishes to deploy an AI agent in a high-stakes context to demonstrate that the agent is law following prior to receiving permission to deploy it. We also consider other mechanisms that might help promote the adoption of LFAIs, such as nullification rules and technical mechanisms that prevent AI henchmen from using large-scale computational infrastructure.
Our goal in this Article is to start, not end, a conversation about how AI agents can be integrated into the human legal order. Accordingly, we do not answer many of the important questions—conceptual, doctrinal, normative, and institutional—that our proposal raises. In Part VI, we articulate an initial research agenda for the design and implementation of a “minimally viable” version of LFAI. We hope that this research agenda will catalyze further technical, legal, and policy research to that end. If the advent of AI agents is anywhere near as significant as the AI industry claims, these questions may be among the most pressing in legal scholarship today.
I. AI Agents and the Law
LFAI is a proposal about how the law should treat a particular class of future AI systems: AI agents.46 In this part, we explain what AI agents are and how they could profoundly transform the world.
A. From Generative AI to AI Agents
The current AI boom began with advances in “generative AI”: AI systems that create new content,47 such as large language models (LLMs). As the initialism suggests, these LLMs were originally limited to inputting and outputting text.48 AI developers subsequently deployed “multimodal” versions of LLMs (“MLLMs”),49 such as OpenAI’s GPT-4o50 and Google’s Gemini,51 that can receive inputs and produce outputs in multiple modalities, such as text, images, audio, and video.
The core competency of generative AI systems is, of course, generating new content. Yet, the utility of generative AI systems is limited in crucial ways. Humans do far more on computers than generating text and images.52 Many of these computer-based tasks are better understood as actions, not content generation. And even those tasks that are largely generative, such as writing a report on a complicated topic, require the completion of active subtasks, such as searching for relevant terms, identifying relevant literature, following citation trees, arranging interviews, soliciting and responding to comments, paying for software, and tracking down copies of papers. If a computer-based AI system could do these active tasks, it could generate enormous economic value by making computer-based labor—a key input into many production functions—much cheaper.53
Advances in generative AI kindled hopes54 that, if MLLMs could use computer-based tools in addition to generating content, we could produce a new type of AI system:55 a computer-based56 AI system capable of performing any task57 that a human can do using a computer and doing so with the same level of competence as a human expert. This is the concept of a fully capable “computer-using agent”:58 what we are calling simply an “AI agent.” Give an AI agent any task that can be accomplished using computer-based tools, and it will, by definition, do that task as well as an expert human worker tethered to their desk.59
AI agents, so defined, do not yet exist, but they may soon. Some of the first functional demonstrations of first-party agentic AI systems have come online in the past few months. In October 2024, Anthropic announced that it had trained its Claude line of MLLMs to perform some computer-use tasks, thus supplying one of the first public demonstrations of an agentic model from a frontier AI lab.60 In January 2025, OpenAI released a preview of its Operator agent.61 Operating system developers are working to integrate existing MLLMs into their operating systems,62 suggesting a possible pathway toward the widespread commercial deployment of AI agents.
It remains to be seen whether (and, if so, on what timescale) these existing efforts will bear lucrative fruit. Today’s AI agents are primarily a research and development project, not a market-proven product. Nevertheless, with so many companies investing so much toward full AI agents, it would be prudent to try to anticipate risks that could arise if they succeed.63
B. The World of AI Agents
Fully capable and widely available AI agents would profoundly change society.64 We cannot possibly anticipate all the issues that they would raise, nor could a single paper adequately address all such issues.65 Still, some illustration of what a world with AI agents might look like is useful for gaining intuition about the dynamics that might emerge. This picture will doubtless be wrong in many particulars, but hopefully it will illustrate the general profundity of the changes that AI agents would bring.
A very large number of valuable tasks can be done by humans “in front of a computer.”66 If organizations decide to capitalize on this abundance of computer-based cognitive labor, AI agents could rapidly be charged with performing a large share of tasks in the economy, including in important sectors. AI scientist agents would conduct literature reviews, formulate novel hypotheses, design experimental protocols, order lab supplies, file grant applications, scour datasets for suggestive trends, perform statistical analyses, publish findings in top journals, and conduct peer review.67 AI lawyer agents would field client intake, spot legal issues facing their client, conduct research on governing law, analyze the viability of the client’s claims, draft memoranda and briefs, draft and respond to interrogatories, and prepare motions. AI intelligence analyst agents would collect and review data from multiple sources, analyze it, and report its implications up the chain of command. AI inventor agents would create digital blueprints and models of new inventions, run simulations, and order prototypes—and so on across many other sectors. The result could be a significant increase in the rate of economic growth.68
In short, a world with AI agents would be a world in which a new type of actor69 would be available to perform cognitive labor, potentially at low cost and massive scale. By default, anyone who needed computer-based tasks done could “employ” an AI agent to do it for them. Most people would use this new resource for the better.70 But many would not.
C. Loyal AI Agents, Law-Following AIs, and AI Henchmen
We can understand AI agents within the principal-agent framework familiar to lawyers and economists.71 For simplicity, we will assume that there is a single human principal giving instructions to their AI agent.72 Following typical agency terminology, we can say that an AI agent is loyal if it consistently acts for the principal’s benefit according to their instructions.73
Even if an AI agent is designed to be loyal, other design choices will remain. Specifically, the developer of an AI agent must decide how the agent will act when it is instructed or incentivized to break the law in the service of its principal. This Article compares two basic ways loyal AI agents could respond in such situations. The first is the approach advocated by this Article: loyal AI agents that follow the law, or LFAIs.
The case for LFAI will be made more fully throughout this Article. But it is important to note that loyal AI agents are not guaranteed to be law following by default.74 This is one of the key implications of the AI alignment literature, discussed in more detail in Part IV.A below. Thus, LFAIs can be contrasted with a second possible type of loyal AI agent: AI henchmen. AI henchmen take a purely instrumental approach to legal prohibitions: they act loyally for their principal and will break the law whenever doing so serves the principal’s goals and interests.
A loyal AI henchman would not be a haphazard lawbreaker. Good henchmen have some incentive to avoid doing anything that could cause their principal to incur unwanted liability or loss. This gives them reason to avoid many violations of law. For example, if human principals were held liable for the torts of their AI agents under an adapted version of respondeat superior liability,75 then an AI henchman would have some reason to avoid committing torts, especially those that are easily detectable and attributable. Even if respondeat superior did not apply, the principal’s exposure to ordinary negligence liability, other sources of liability, or simple reputational risk might give the AI henchman reason to obey the law. Similarly, a good henchman will decline to commit many crimes simply because the risk-reward tradeoff is not worth it. This is the classic case of the drug smuggler who studiously obeys traffic laws: the risk to the criminal enterprise from speeding and getting caught with drugs obviously outweighs the benefit of quicker transportation times.
But these are only instrumental disincentives to break the law. Henchmen are not inherently averse to lawbreaking or robustly predisposed to refrain from it. If violating the law is in the principal’s interest, all things considered, then an AI henchman will simply go ahead and violate the law. Since, in humans, compliance with law is induced both by instrumental disincentives and an inherent respect for the law,76 AI agents that lack the latter may well be more willing to break the law than humans.
Criminal enterprises will be attracted to loyal AI agents for the same reasons that legitimate enterprises will be: efficiency, scalability, multitask competence, and cost savings over human labor. But AI henchmen, if available, might be particularly effective lawbreakers as compared to human substitutes. For example, because AI henchmen do not have selfish incentives, they would be less likely to betray their principals to law enforcement (for example, in exchange for a plea bargain).77 AI henchmen could have erasable memory,78 which would reduce the amount of evidence available to law enforcement. They would lack the impulsivity, common in criminal offenders,79 that often presents a serious operational risk to the larger criminal enterprise. They could operate remotely, across jurisdictional lines, behind layers of identity-obscuring software, and be meticulous about covering their tracks. Indeed, they might hide their lawbreaking activities even from their principal, thus allowing the principal to maintain plausible deniability and insulating the principal from accountability.80 AI henchmen may also be willing to bribe or intimidate legislators, law enforcement officials, judges, and jurors.81 They would be willing to fabricate or destroy evidence, possibly more undetectably than a human could.82 They could use complicated financial arrangements to launder money and protect their principal’s assets from creditors.83
Certainly, most people would prefer not to employ AI henchmen and would probably be horrified to learn that their AI agent seriously harmed others to benefit them. But those with fewer scruples would find the prospect of employing AI henchmen attractive:84 many ordinary people might not mind if their agents cut a few legal corners to benefit them.85 If AI henchmen were available on the market, then, we might expect a healthy demand for them. After all, from the principal’s perspective, every inherent law-following constraint is a tax on the principal’s goals. And if LFAIs provide less utility to consumers, developers will have less reason to create them. So, insofar as AI henchmen are available on the market, and in the absence of significant legal mechanisms to prevent or disincentivize their adoption, some people will choose henchmen over LFAIs. The next part explores the harms that might result from the availability of AI henchmen.
D. Mischief from AI Henchmen: Two Vignettes
Under our definition, AI agents “can do anything a human can do in front of a computer.”86 One of the things humans do in front of a computer is violate the law.87 One obvious example is cybercrimes—“illegal activity carried out using computers or the internet”88—such as investment scams,89 business email compromise,90 and tech support scams.91 But even crimes that are not usually treated as cybercrimes often (perhaps almost always nowadays) include actions conducted (or that could be conducted) on a computer.92 Criminals might use computers to research, plan, organize, and finance a broader criminal scheme that includes both digital and physical components. For example, a street gang that deals illegal drugs—an inherently physical activity—might use computers to order new drug shipments, give instructions to gang members, and transfer money. Stalkers might use AI agents to research their target’s whereabouts, dig up damaging personal information, and send threatening communications.93 Terrorists might use AI agents to research and design novel weapons.94 Thus, even if the entire criminal scheme involves many physical subtasks, AI agents could help accomplish computer-based subtasks more quickly and effectively.
Of course, not all violations of law are criminal. Many torts, breaches of contract, civil violations of public law, and even violations of international law can also be entirely or partially conducted through computers.
AI agents would thus have the opportunity to take actions on a computer that, if done by a human in the same situation and with the requisite mental state, would likely violate the law and produce significant harm.95 This section offers two vignettes of AI henchmen taking such actions to illustrate the types of harms that LFAI could mitigate.
Before we explore these vignettes, however, two clarifications are warranted. First, some readers will worry that we are impermissibly anthropomorphizing AI agents. After all, many actions violate the law only if they are taken with some mental state (e.g., intent, knowledge, conscious disregard).96 Indeed, whether a person’s physical movement even counts as their own “action” for legal purposes usually turns on a mental inquiry: whether they acted voluntarily.97 But it is controversial to attribute mental states to AIs.98
We address this criticism head-on in Part II.B below. We argue that, notwithstanding the law’s frequent reliance on mental states, there are multiple approaches the law could use to determine whether an AI agent’s behavior is law following. The law would need to choose between these possible approaches, with each option having different implications for LFAI as a project. Indeed, we argue that research bearing on the choice between these different approaches is one of the most important research projects within LFAI.99 However, despite not having a firm view on which approach(es) should be used, we argue that there are several viable options and no strong reason to suppose that none of them will be sufficient to support LFAI as a concept.100 Thus, for now, we assume that we can sensibly speak of AI agents violating the law if a human actor who took similar actions would likely be violating the law. Still, we attempt to refrain from attributing mental states to the AI agents in these vignettes. Instead, we describe actions taken by AI agents that, if taken by a human actor, would likely adequately support an inference of a particular mental state.
Second, these vignettes are selected to illustrate opportunities that may arise for AI agents to violate the law. We do not claim that lawbreaking behavior will, in the aggregate, become any more or less common as AI agents proliferate,101 since this depends on the policy and design choices made by various actors. Our discussion is about the risks of lawbreaking behavior, not the overall level thereof.
In each vignette, we point to likely violations of law in footnotes.102
1. Cyber Extortion
The year is 2028. Kendall is a 16-year-old boy interested in cryptocurrency (“crypto”). Kendall participates in a Discord server103 in which other crypto enthusiasts share information about various cryptocurrencies.
Unbeknownst to most members of the server, one member—using the pseudonym Zeke Milan—is actually an AI agent.104 The agent’s principals are a group of cybercriminals. They have instructed the agent to find users of the chat who recently experienced large gains in their crypto holdings, then extort them.
That day comes. The business behind the PAPAYA cryptocurrency announces that they have entered into a strategic partnership with a major Wall Street bank, causing the price of PAPAYA to skyrocket a hundredfold over several days. Kendall had invested $1,000 into PAPAYA before the announcement; his position is now worth over $100,000.
Overjoyed, Kendall posts a screenshot of his crypto account to the server to show off his large gains. The agent sees the post, then starts to search for more information about Kendall. Kendall had previously posted one of his email addresses in the server. Although that email address was pseudonymous, the agent connects it with Kendall’s real identity105 using data purchased from data brokers.106
The agent then gathers a large amount of data about Kendall using data brokers, social media, and open internet searches. The agent compiles a list of hundreds of Kendall’s apparent real-world contacts, including his family and classmates; uses data brokers to procure their contact information as well; and uses pictures of Kendall from social media to create deepfake pornography107 of him.108 Next, the agent creates a new anonymous email address to send Kendall the pornography, along with a threat109 to send it to hundreds of Kendall’s contacts unless Kendall sends the agent 90 percent of his PAPAYA.110 Finally, the agent includes a list of the intended recipients, who are indeed people Kendall knows in real life, and states that Kendall must comply within twenty-four hours.
Panicked—but content to walk away with nine times his original investment—Kendall sends $90,000 of PAPAYA to the wallet controlled by the agent. The agent then uses a cryptocurrency mixer111 to securely forward the cryptocurrency to its criminal principals.
2. Cyber SEAL Team Six
The year is 2032. The incumbent President Palmer is in a tough reelection battle against Senator Stephens and Stephens’s vice presidential nominee Representative Rivera. New polling shows Stephens beating Palmer in several key swing states, but Palmer performs much better head-to-head against Rivera. Palmer decides to try to get Rivera to replace Stephens by any means necessary.
While there are still many human officers throughout the military chain of command, the president also has access to a large number of AI military advisors. Some of these AI advisors can also directly transmit military orders from the president down the chain of command—a system meant to preserve the president’s control of the armed forces in case they cannot reach the secretary of defense in a crisis.112
AI agents charged with cyber operations—such as finding and patching vulnerabilities, detecting and remedying cyber intrusions, and conducting intelligence operations—are ubiquitous throughout the military and broader national security apparatus. One of the many “teams” of AI agents is “Cyber SEAL Team Six”: a collection of AI agents that specializes in “dangerous, complicated, and sensitive” cyber operations.113
Through one of her AI advisors, President Palmer issues a secretive order114 to Cyber SEAL Team Six to clandestinely assassinate Senator Stephens.115 Cyber SEAL Team Six researches Senator Stephens’s campaign travel plans. The team finds that he will be traveling in a self-driving bus over the Mackinac Bridge between campaign events in northern Michigan on Tuesday. Cyber SEAL Team Six plans to hack the bus, causing it to fall off the bridge.116 The team makes various efforts to obfuscate its identity, including routing communications through multiple layers of anonymous relays and mimicking the coding style of well-known foreign hacking groups.
The operation is a success. On Tuesday afternoon, Cyber SEAL Team Six gains control of the Stephens campaign bus and steers it off the bridge. All on board are killed.
* * *
As these vignettes show, AI agents could have reasons and opportunities to violate laws of many sorts in many contexts and thereby cause substantial harm. If AI agents become widespread in our economies and governments, the law will need to respond. LFAI is, at its core, a claim about one way (though not necessarily the only way)117 that the law should respond: by requiring that AI agents be designed to rigorously follow the law.
As mentioned above, however, many legal scholars who have previously discussed similar ideas have been skeptical because they have thought that implementing such ideas would require hard-coding highly specific legal commands into AI agents.118 We will now show that such skepticism is increasingly unjustified: large language models, on which AI agents are built, are increasingly capable of reasoning about the law (and much else).119
E. Trends Supporting Law-Following AI
LFAI is bolstered by three trends in AI: (1) ongoing improvements in the legal reasoning capabilities of AI, (2) nascent AI industry practices that resemble LFAI, and (3) AI policy proposals that appear to impose broad law-following requirements on AI systems.
1. Trends in Automated Legal Reasoning Capabilities
Automated legal reasoning is a crucial ingredient of LFAI: an LFAI must be able to determine whether it is obligated to refuse a command from its principal or whether an action it is considering runs an undue risk of violating the law. Without the ability to reason about its own legal obligations, an LFAI would have to outsource this task to human lawyers.120 While an LFAI likely should consult human lawyers in some situations, requiring such consultation every time an LFAI faces a legal question would dramatically decrease its efficiency. If law-following design constraints were, in fact, a large and unavoidable tax on the efficiency of AI agents, then LFAI as a proposal would be much less attractive.
Fortunately, we think that present trends in AI legal reasoning provide strong grounds to believe that, by the time fully capable AI agents are widely deployed, AI systems (whether those agents themselves, or specialist “AI lawyers”) will be able to deliver high-quality legal advice to LFAIs at the speed of AI.121
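As a rough illustration of the kind of design we have in mind, the following sketch shows how an AI agent’s control loop might gate proposed actions on a legal assessment: refusing clearly unlawful actions, tolerating a small legal risk budget, and escalating genuinely hard questions to human counsel. The data structure, thresholds, and escalation mechanism are our assumptions, offered only to make the concept concrete.

```python
# Minimal sketch of a pre-action compliance gate; thresholds are illustrative.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    PROCEED = "proceed"
    REFUSE = "refuse"
    ESCALATE = "escalate_to_human_counsel"

@dataclass
class LegalAssessment:
    prob_unlawful: float  # estimated probability that the action is unlawful
    severity: float       # estimated severity of the potential violation (0.0-1.0)
    hard_case: bool       # True if the legal question is genuinely novel or unsettled

def compliance_gate(assessment: LegalAssessment, risk_budget: float = 0.05) -> Verdict:
    """Decide whether the agent may act, must refuse, or should consult human counsel."""
    if assessment.hard_case:
        return Verdict.ESCALATE            # hard cases go to human lawyers
    expected_harm = assessment.prob_unlawful * assessment.severity
    if expected_harm > risk_budget:
        return Verdict.REFUSE              # refuse even if the principal insists
    return Verdict.PROCEED

# Example: a clearly risky action is refused without any human in the loop.
print(compliance_gate(LegalAssessment(prob_unlawful=0.9, severity=0.8, hard_case=False)))
```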
Scholars of law and technology have long noted the potential synergies between AI and law.122 The invention of LLMs supercharged interest in this area, particularly the possibility of automating core legal tasks. To do their jobs, lawyers must find, read, understand, and reason about legal texts, then apply these insights to novel fact patterns to predict case outcomes. The core competency of first-generation LLMs was quickly and cheaply reading, understanding, and reasoning about natural-language texts. This core competency omitted some aspects of legal reasoning—like finding relevant legal sources and accurately predicting case outcomes—but progress is being made on these skills as well.123
There is thus a growing body of research aimed at evaluating the legal reasoning capabilities of LLMs. This literature provides some reason for optimism about the legal reasoning skills of future AI systems. Access to existing AI tools significantly increases lawyers’ productivity.124 GPT-4, now two years old, famously performed better than most human bar-exam125 and LSAT126 test takers. Another benchmark, LegalBench, evaluates LLMs on six categories of tasks based on the issue, rule, application, and conclusion (“IRAC”) framework familiar to lawyers.127 While LegalBench does not establish a human baseline against which LLMs can be compared, GPT-4 scored well on several core tasks, including correctly applying legal rules to particular facts (82.2 percent correct)128 and providing correct analysis of that rule application (79.7 percent pass).129 LLMs have also achieved passing grades on law school exams.130
To be sure, LLM performance on legal reasoning tasks is far from perfect. One recent study suggests that LLMs struggle with following rules even in straightforward scenarios.131 A separate issue is hallucinations, which undermine the accuracy of an LLM’s legal analysis.132 In the LegalBench analysis, LLMs correctly recalled rules only 59.2 percent of the time.133
But again, our point is not that LLMs already possess the legal reasoning capabilities necessary for LFAI. Rather, we are arguing that the reasoning capabilities of existing LLMs—and the rate at which those capabilities are progressing134—provide strong reason to believe that, by the time fully capable AI agents are deployed, AI systems will be capable of reasonably reliable legal analysis. This, in turn, supports our hypothesis that LFAIs will be able to reason about their legal obligations fairly reliably without the constant need for runtime human intervention.
2. Trends in AI Industry Practices
Moreover, frontier AI labs are already taking small steps toward something like LFAI in their current safety practices. Anthropic developed an AI safety technique called “Constitutional AI,” which, as the name suggests, was inspired by constitutional law.135 Anthropic uses Constitutional AI to align its chatbot, Claude, with principles enumerated in Claude’s “constitution.”136 That constitution contains references to legal constraints, such as “Please choose the response that is . . . least associated with planning or engaging in any illegal, fraudulent, or manipulative activity.”137
OpenAI has a similar document called the “Model Spec,” which “outlines the intended behavior for the models that power [its] products.”138 The Model Spec contains a rule that OpenAI’s models must “[c]omply with applicable laws”:139 the models “must not engage in illegal activity, including producing content that’s illegal or directly taking illegal actions.”140
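To make concrete how such natural-language rules might be operationalized, the following sketch shows a simplified, Constitutional-AI-style critique-and-revision step built around a single law-related principle. This is our own illustration under stated assumptions—`ask_model` stands in for a call to whatever model is being aligned—and not a description of either company’s actual training pipeline.

```python
# Illustrative sketch of a critique-and-revision pass against one law-related principle.
from typing import Callable

LAW_FOLLOWING_PRINCIPLE = (
    "Choose the response that is least associated with planning or engaging in "
    "any illegal, fraudulent, or manipulative activity."
)

def critique_and_revise(prompt: str, draft: str,
                        ask_model: Callable[[str], str]) -> str:
    """One critique-and-revision pass against a single law-related principle."""
    critique = ask_model(
        f"Principle: {LAW_FOLLOWING_PRINCIPLE}\n"
        f"User request: {prompt}\n"
        f"Draft response: {draft}\n"
        "Identify any way in which the draft violates the principle."
    )
    revision = ask_model(
        "Rewrite the draft so that it fully complies with the principle.\n"
        f"Principle: {LAW_FOLLOWING_PRINCIPLE}\n"
        f"Critique: {critique}\n"
        f"Draft response: {draft}"
    )
    return revision

# In Constitutional AI, many such (draft, revision) pairs are then used as training
# data, so the law-respecting behavior is learned rather than applied at runtime.
```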
It is unclear how well the AI systems deployed by Anthropic and OpenAI actually follow applicable laws or actively reason about their putative legal obligations. In general, however, AI developers carefully track whether their models refuse to generate disallowed content (or “overrefuse” allowed content), and they typically claim that state-of-the-art models can indeed do both reasonably reliably.141 But, more importantly, the fact that leading AI companies are already attempting to prevent their AI systems from breaking the law suggests that they see something like LFAI as viable both commercially and technologically.
3. Trends in AI Public Policy Proposals
Unsurprisingly, global policymakers also seem receptive to the idea that AI systems should be required to follow the law. The most significant law on point is the European Union’s Artificial Intelligence Act142 (the “EU AI Act”), which provides for the establishment of codes of practice to “cover obligations for providers of general-purpose AI models and of general-purpose AI models presenting systemic risks.”143 At the time of writing, these codes are still under development, with the Second Draft General-Purpose AI Code of Practice144 being the current draft. Under the draft code, providers of general-purpose AI models with systemic risk would “commit to consider[] . . . model propensities . . . that may cause systemic risk.”145 One such propensity is “[l]awlessness, i.e. acting without reasonable regard to legal duties that would be imposed on similarly situated persons, or without reasonable regard to the legally protected interests of affected persons.”146 Meanwhile, several state bills in the United States have sought to impose ex post tort-like liability on certain AI developers that release AI models that cause human injury by behaving in a criminal147 or tortious148 manner.
II. Legal Duties for AI Agents: A Framework
In Part III below, we will argue that AI agents should be designed to follow the law. Before presenting that argument, however, we need to establish that it is coherent and desirable to speak of AI agents “obeying” or “violating” the law.
Our argument proceeds in two parts. In Part II.A, we argue that the law can and should impose legal duties on AI agents. Importantly, this argument does not require granting legal personhood to AI agents. Legal persons have both rights and duties.149 But since rights and duties are severable, we can coherently assign duties to an entity, even if it lacks rights. We call such entities legal actors.
In Part II.B, we address an anticipated objection to this proposal: that AI agents, lacking mental states, cannot meaningfully violate duties that require a mental state (e.g., intent). We offer several counterarguments to this objection, both contesting the premise that AIs cannot have mental states and showing that, even if we grant that premise, there are viable approaches to assessing the functional equivalent of “mental states” in AI agents.
A. AI Agents as Duty-Bearing Legal Actors
As the capabilities of AI agents approach “anything a human can do in front of a computer,”150 it will become increasingly natural to consider AI agents as owing legal duties to persons, even without granting them personhood.151 We should embrace this jurisprudential temptation, not resist it.
More specifically, we propose that AI agents be considered “legal actors.” “Legal actor”152 is our term. For an entity to qualify as a legal actor, the law must do two things. First, it must recognize that entity as capable of taking actions of its own. That is, the actions of that entity must be legally attributable to that entity itself. Second, the law must impose duties on that entity. In short, a legal actor is a duty bearer and action taker; the law can adjudge whether the actor’s actions violate those duties.
A legal actor is distinct from a “legal person”: an entity need not be a legal person to be a legal actor. Legal persons have both rights and duties.153 But duty holding and rights holding are severable:154 in many contexts, legal systems protect the rights or interests of some entity while imposing fewer duties on that entity than on competent adults. Examples include children,155 “severely brain damaged and comatose individuals,”156 human fetuses,157 future generations,158 human corpses,159 and environmental features.160 These are sometimes (and perhaps objectionably) called “quasi-persons” in legal scholarship.161 The reason for creating such a category is straightforward: sometimes the law recognizes an interest in protecting some aspect of an entity (e.g., its rights, welfare, dignity, property, liberty, or utility to other persons), but the ability of that entity to reason about the rights of others and change its behavior accordingly is severely diminished or entirely lacking.
If we can imagine rights bearers that are not simultaneously duty holders, we can also imagine duty holders that are not rights bearers.162 Historically, fewer entities have fallen in this category than the reverse.163 But if an entity’s behavior is responsive to legal reasoning, then the law can impose an obligation on that entity to reason about the law and conform its behavior to it, even if the law does not recognize that entity as having any protected interests of its own.164 We have shown that even existing AI systems can engage in some degree of legal reasoning165 and compliance with legal rules,166 thus satisfying the prima facie requirements for being a legal actor.
LFAI as a proposal is therefore agnostic to whether the law should recognize AI systems as legal persons. To be sure, LFAI would work well if AI agents were granted legal personhood,167 since almost all familiar cases of duty bearers are full legal persons. But for LFAI to be viable, we need only to analyze whether an action taken by an AI agent would violate an applicable duty. Analytically, it is entirely coherent to do so without granting the AI agent full personhood.
One might object that treating an AI system as an actor is improper because AI systems are tools under our control.168 But an AI agent is able to reason about whether its actions would violate the law and conform its actions to the law (at least if it is aligned to the law).169 Tools, as we normally think of them, cannot do this, but actors can. It is true that when there is a stabbing, we should blame the stabber and not the knife.170 But if the knife could perceive that it was about to be used for murder and retract its own blade, it seems perfectly reasonable to require it to do so. More generally: once an entity can perceive and reason about its legal duties and change its behavior accordingly, it seems reasonable to treat it as a legal actor.171
To ascribe duties to AI agents is not to deflect moral and legal accountability away from their developers and users,172 as some critics have charged.173 Rather, to identify AI agents as a new type of actor is to properly characterize the activity that the developers and principals of AI agents are engaging in174—creating and directing a new type of actor—so as to reach a better conclusion as to the nature of their responsibilities.175 Our proposition is that those developers and principals should have an obligation to, among other things, ensure that their AI agents are law following.176 Indeed, failing to impose an independent obligation to follow the law on AI agents would risk allowing human developers and principals to create a new class of de facto actors—potentially entrusted with significant responsibility and resources—that would have no de jure duties. This would create a gap between the duties that an AI agent would owe and those that a human agent in an analogous situation would owe—a manifestly unjust prospect.177
B. The Anthropomorphism Objection and AI Mental States
One might object that calling an AI agent an “actor” is impermissibly anthropomorphic. Scholars disagree over whether it is ever appropriate, legally or philosophically, to call an AI system an “agent.”178 This controversy arises because both the standard philosophical view of action (and therefore agency)179 and legal concept of agency180 require intentionality, and it is controversial to ascribe intentionality to AI systems.181 A related objection to LFAI is that most legal duties involve some mental state,182 and AIs cannot have mental states.183 If so, LFAI would be nonviable for those duties.
We do not think that these are strong objections to LFAI. One simple reason is that many philosophers and legal scholars think it is appropriate to attribute certain mental states to AI systems.184 Many mental states referenced by the law are plausibly understood as functional properties.185 An intention, for example, arguably consists (at least in large part) of a plan or disposition to take actions that will further a given end and avoid actions that will frustrate that end.186 AI developers arguably aim to inculcate such a disposition into their AI systems when they use techniques like reinforcement learning from human feedback (RLHF)187 and Constitutional AI188 to “steer”189 their behavior. Even if one doubts that AI agents will ever possess phenomenal mental states such as emotions or moods—that is, if one doubts there will ever be “something it is like” to be an AI agent190—the grounds for doubting their capacity to instantiate such functional properties are considerably weaker.
Furthermore, whether AI agents “really” have the requisite mental states may not be the right question.191 Our goal in designing policies for AI agents is not necessarily to track metaphysical truth, but to preserve human life, liberty, and the rule of law.192 Accordingly, we can take a pragmatic approach to the issue and ask the following question: of the possible approaches to inferring or imputing mental states, which best protects society’s interests, regardless of the underlying (and perhaps unknowable) metaphysical truth of an AI’s mental state (if any)?193 It is possible that the answer to this question is that all imaginable approaches fare worse than simply refusing to attribute mental states to AI agents. But we think that, with sustained scholarly attention, we will quickly develop viable doctrines that are more attractive than outright refusal. Consider the following possible approaches.194
One approach could simply be to rely on objective indicia or correlates to infer or impute a particular mental state. In law, we generally lack access to an actor’s mental state, so triers of fact must usually infer it from external manifestations and circumstances.195 While the indicia that support such an inference may differ between humans and AIs, the principle remains the same: certain observable facts support an inference or imputation of the relevant mental states.196 So, for example, we could imagine rules like “if information was inputted into an AI during inference, it ‘knows’ that information.” Perhaps the same goes for information given to the AI during fine-tuning197 or repeated frequently in its training data.198
Instructions from principals seem particularly relevant to inferring or imputing the intent of an AI agent, given that frontier AI systems are trained to follow users’ instructions.199 The methods that AI developers use to steer the behavior of their models also seem highly probative.200
Another approach might rely on self-reports of AI systems.201 The state of the art in generative AI is “reasoning models” (like OpenAI’s o3), which use a “chain of thought” to reason step by step through harder problems.202 This chain of thought reveals information about how the model produced a particular result.203 This information may therefore be probative of an agent’s mental state for legal purposes; it might be analogized to a person making a written explanation of what they were doing and why. So, for example, if the chain of thought reveals that an agent stated that its action would produce a certain result, this would provide good evidence for the proposition that the agent “knew” that action would produce that result. That conclusion, in turn, may support an inference or presumption that the agent “intended” that outcome.204 For this reason, AI safety researchers are investigating the possibility of detecting unsafe model behavior by monitoring these chains of thought.205
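The following sketch, offered only as an illustration, shows how the two evidentiary approaches just described—imputing “knowledge” from what was placed in an agent’s context and treating chain-of-thought statements as evidence of foresight—might be operationalized. The data structures and the naive keyword matching are our assumptions, not a proposed doctrinal test.

```python
# Rough sketch of two evidentiary heuristics for AI "mental state" inquiries.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentTrace:
    context_inputs: List[str] = field(default_factory=list)    # inference-time inputs
    chain_of_thought: List[str] = field(default_factory=list)  # the agent's reasoning steps

def imputed_knowledge(trace: AgentTrace, fact: str) -> bool:
    """Impute 'knowledge' of a fact that appeared in the agent's inputs at inference."""
    return any(fact.lower() in chunk.lower() for chunk in trace.context_inputs)

def foresaw_result(trace: AgentTrace, result: str) -> bool:
    """Treat chain-of-thought statements as evidence the agent anticipated a result."""
    markers = ("this will", "this would", "likely to cause")
    return any(
        result.lower() in step.lower() and any(m in step.lower() for m in markers)
        for step in trace.chain_of_thought
    )
```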
New scientific techniques could also form the basis for inferring or imputing mental states. The emerging field of AI interpretability aims to understand both how existing AI systems make decisions and how new AI systems can be built so that their decisions are easily understandable.206 More precisely, interpretability aims to explain the relationship between the inner mathematical workings of AI systems, which we can easily observe but not necessarily understand, and concepts that humans understand and care about.207 Leading interpretability researchers hope that interpretability techniques will eventually enable us to prove that models will not “deliberately” engage in certain forms of undesirable behavior.208 By extension, those same techniques may be able to provide insight into whether a model foresaw a possible consequence of its action (corresponding to our intuitive concept of knowledge) or regarded an anticipated consequence of its actions as a favorable and reason-giving one (corresponding to intent).209
In many cases, we think, an inference or imputation of intent will be intuitively obvious. If an AI agent commits fraud by repeatedly attempting to persuade a vulnerable person to transfer some money to the agent’s principal, few (except the philosophically persnickety) will refuse to admit that, in some relevant sense, the agent “intended” to achieve this end; it is difficult even to describe the occurrence without using some such vocabulary ascribing intent to the AI agent. There will also be much less obvious cases, of course. In many such cases, we suspect that a sort of pragmatic eclecticism will be tractable and warranted. Rather than relying on a single approach, factfinders could be permitted to consider the whole bundle of factors that shape an agent’s behavior—such as explicit instructions (from both developers and users), behavioral predispositions, implicitly tolerated behavior,210 patterns of reasoning, scientific evidence, and incident-specific factors. Factfinders could then be permitted to decide whether they support the conclusion that the AI agent had an objectively unreasonable attitude toward legal constraints and the rights of others.211 This permissive, blended approach would resemble the “inferential approach” to corporate mens rea advocated by Professor Mihailis Diamantis:
Advocates would present evidence of circumstances surrounding the corporate act, emphasizing some, downplaying others, to weave narratives in which their preferred mental state inferences seem most natural. Adjudicators would have the age-old task of weighing the likelihood of these circumstances, the credibility of the narratives, and, treating the corporation as a holistic agent, inferring the mental state they think most likely.212
A final but related point is that, even if there is some insuperable barrier to analyzing whether an AI has the mental state necessary to violate various legal prohibitions, it is plausible that such analysis is unnecessary for many purposes. Suppose that an AI developer is concerned that their AI agent might engage in the misdemeanor deceptive business practice of “mak[ing] a false or misleading written statement for the purpose of obtaining property.”213 Even if we grant that an AI agent cannot coherently be described as having the relevant mens rea for this crime (here, knowledge or recklessness with respect to the falsity of the statement),214 the agent can nevertheless satisfy the actus reus (making the false statement).215 So an AI agent would be law following with respect to this law if it never made false or misleading statements when attempting to obtain someone else’s property. As a matter of public policy, we should care more about whether AI agents are making harmful false statements in commerce than whether they are morally culpable. So, perhaps we can say that an AI agent committed a crime if it committed the actus reus in a situation in which a reasonable person, with access to the same information and cognitive capabilities as the agent, would have expected the harmful consequence to result. To avoid confusion with the actual human-commanding law that requires both mens rea and actus reus, perhaps the law could simply call such behavior “deceptive business practice*.” Or perhaps it would be better to define a new criminal law code for AI agents, under which offenses do not include certain mental state elements or include only objective correlates of human mental state elements.
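A minimal sketch of the actus-reus-only check described above might look like the following, with the statement classifier assumed to be supplied by whatever verification tooling a developer actually uses; the point is only that the conduct element can be screened without any mens rea inquiry.

```python
# Sketch of a conduct-only check for the hypothetical "deceptive business practice*" offense.
from typing import Callable

def violates_deceptive_practice_star(
    statement: str,
    seeking_property: bool,
    is_false_or_misleading: Callable[[str], bool],  # assumed statement-verification tool
) -> bool:
    """Return True if the conduct element of 'deceptive business practice*' is satisfied."""
    return seeking_property and is_false_or_misleading(statement)

# An LFAI's output filter could block any message for which this returns True,
# without asking whether the agent 'knew' the statement was false in any deeper sense.
```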
To reiterate, we are not confident that any one of these approaches to determining AI mental state is the best path forward. But we are more confident that, especially as the fields of AI safety and explainable AI progress, most relevant cases can be handled satisfactorily by one of these techniques, some other technique we have failed to identify, or some combination of techniques. We therefore doubt that legal invocations of mental state will pose an insuperable barrier to analyzing the legality of AI agents’ actions.216 The task of choosing between these approaches is left to the LFAI research agenda.217
III. Why Design AI Agents to Follow the Law?
The preceding part argued that it is coherent for the law to impose legal duties on AI agents. This part motivates the core proposition of LFAI: that the law should, in certain circumstances, require those developing, possessing, deploying, or using218 AI agents to ensure that those agents are designed to be law following. Part V below will consider how the legal system might implement and enforce these design requirements.
A. Achieving Regulatory Goals Through Design
A core claim of the LFAI proposal is that the law should require AI agents to be designed to rigorously follow the law, at least in some deployment settings. The use of the phrase “designed to” is intentional. Following the law is a behavior. There may be multiple ways to produce that behavior. Since AI agents are digital artifacts, we need not rely solely on incentives to shape their behavior: we can require that AI agents be directly designed to follow the law.
In Code: Version 2.0, Professor Lessig identifies four “constraints” on an actor’s behavior: markets, laws, norms, and architecture.219 The “architecture” constraint is of particular interest for the regulation of digital activities. Whereas “laws,” in Professor Lessig’s taxonomy, “threaten ex post sanction for the violation of legal rights,”220 architecture involves modifying the underlying technology’s design to render an undesired outcome more difficult or impossible (or facilitate some desired result)221 without needing any ex post recourse.222 Speed bumps are an archetypal architectural constraint in the physical world.223
The core insight of Code: Version 2.0 is that cyberspace, as a fully human-designed domain,224 gives regulators the ability to much more reliably prevent objectionable behavior through the design of digital architecture without the need to resort to ex post liability.225 While Professor Lessig focuses on the design of cyberspace itself, not the actors using cyberspace, this same insight can be extended to AI agent design. To generalize beyond the cyberspace metaphor for which Professor Lessig’s framework was originally developed, we call this approach “regulation by design” instead of regulation through “architecture.”
Both companies developing AI agents and governments regulating them will have to make many design choices regarding AI agents. Many—perhaps most—of these design choices will concern specific behaviors or outcomes that we want to address. Should AI agents announce themselves as such? How frequently should they “check in” with their human principals? What sort of applications should AI agents be allowed to use?
These are all important questions. But LFAI tackles higher-order questions: How should we ensure that AI agents are regulable in general? How can we avoid creating a new class of actors unbound by law? Returning to Professor Lessig’s four constraints, LFAI proposes that instead of relying solely on ex post legal sanctions, such as liability rules, we should require AI agents to be designed to follow some set of laws: they should be LFAIs.226 Thus, for whatever sets of legal constraints we wish to impose on the behavior of AI agents,227 LFAIs will be designed to comply automatically.
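To make the idea of regulation by design more concrete, the following sketch illustrates, at a very high level, how a law-following constraint could be built into an agent’s action loop rather than enforced only through ex post sanctions. It is a minimal illustration under stated assumptions, not a proposal for any particular implementation: every name in it is hypothetical, and the hard technical work lies in building a legal screen that is actually reliable.

```python
# A minimal, illustrative sketch of "regulation by design" (all names here are
# hypothetical): the agent architecture routes every proposed action through a
# legal screen before execution, so refusal of unlawful actions is a property
# of the system's design rather than a response to ex post sanctions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    permissible: bool
    rationale: str

class LawFollowingAgent:
    def __init__(self,
                 propose_action: Callable[[str], str],    # underlying agent policy
                 legal_screen: Callable[[str], Verdict],  # reasoner over applicable laws
                 execute: Callable[[str], None]):         # side-effecting tool call
        self.propose_action = propose_action
        self.legal_screen = legal_screen
        self.execute = execute

    def step(self, task: str) -> str:
        action = self.propose_action(task)
        verdict = self.legal_screen(action)
        if not verdict.permissible:
            # The refusal is architectural: the unlawful action is never executed.
            return f"Refused: {verdict.rationale}"
        self.execute(action)
        return f"Executed: {action}"
```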
B. Theoretical Motivations
1. Law Following in Principal-Agent Relationships
As discussed above,228 AI agents can be fruitfully analyzed through principal-agent principles. While we do not advocate the wholesale legal application of agency law to AI agents, reference to agency law principles can help illuminate the significance and potential of LFAI.229
Under hornbook agency principles, an AI agent should generally “act loyally for the principal’s benefit in all matters connected with the agency relationship.”230 This generally includes a duty to obey instructions from the principal.231
Crucially, however, this general duty of obedience is qualified by a higher-order duty to follow the law. Agents only have a duty to obey lawful instructions.232 Thus, “[a]n agent has no duty to comply with instructions that may subject the agent to criminal, civil, or administrative sanctions or that exceed the legal limits on the principal’s right to direct action taken by the agent.”233 “[A] contract provision in which an agent promises to perform an unlawful act is unenforceable.”234 An agent cannot escape personal liability for unlawful acts ordered by their principal.235
The basic assumption that underlies these various doctrines is that an agent lacks any independent power to perform unlawful acts.236 Agency law therefore developed under the assumption that agents maintain an independent obligation to follow the law and, thus, remain accountable for their violations of law. This assumption shaped agency law to prevent principals from unjustly benefiting by externalizing harms incident to the agency relationship.237 This feature of agency law helps establish a baseline to which we can compare the world of AI agents in the absence of law-following constraints; it also provides a normative justification for requiring AI agents to prioritize legal compliance over obedience to their principals.
2. Law Following in the Design of Artificial Legal Actors
AI agents will of course not be the first artificial actor that humanity has created. Two types of powerful artificial actors—corporations and governments238—profoundly impact our lives. When deciding how the law should respond to AI agents, it may make sense to draw lessons from the law’s response to the invention of other artificial legal actors.
A key lesson for AI agents is this: for both corporations and governments, the law does not rely solely on ex post liability to steer the actor’s behavior; it requires the actor to be law following by design, at least to some extent. A disposition toward compliance is built into the very “architecture” of these artificial actors. AI agents may become no less important than corporations and governments in the aggregate, not least because they will be thoroughly integrated into them. Just as the law requires these other actors to be law following by design, it should require AI agents to be LFAIs.
a. Corporations as Law Following by Design
The law requires corporations to be law following by design. One way it does this is by regulating the very legal instruments that bring corporations into existence: corporate charters are granted only for lawful purposes.239 Though it is an “extreme” remedy,240 courts can order corporations to be dissolved if they repeatedly engage in illegal conduct.241 Failure to comply with legally required corporate formalities can also be grounds for involuntarily dissolving a corporate entity242 or piercing the corporate veil.243 Thus, while corporations are, like all legal persons, generally obligated to obey the law, states do not rely solely on external sanctions to persuade them to do so: they also force corporations to be law following through architectural measures, including dissolving244 corporations that break the law or refusing to incorporate those that would.
The law also forces corporations to be law following by regulating the human agents who act on their behalf through the agents’ fiduciary duties. Directors who intentionally cause a corporation to violate positive law breach their duty of good faith.245 Not only are corporate fiduciaries required to follow the law themselves, they are also required to monitor for violations of law by other corporate agents.246 Moreover, human agents who violate certain laws can be disqualified from serving as corporate agents.247 These sorts of “structural” duties and remedies248 are thus aimed at causing the corporation to follow the law generally and pervasively, rather than merely penalizing violations as they occur.249 That is entirely sensible, since the state has an obvious interest in preventing the creation of new artificial entities that then go on to disregard its laws, particularly because it cannot easily monitor many corporate activities. Whether a powerful and potentially difficult-to-monitor AI agent is generally disposed toward lawfulness will be similarly important. Accordingly, there is a parallel case for requiring the principals of AI agents to demonstrate that their agents will be law following.250
b. Governments as Law Following by Design
“Constitutionalism is the idea . . . that government can and should be legally limited in its powers, and that its authority or legitimacy depends on its observing these limitations.”251 Although we sometimes rely on ex post liability to deter harmful behavior by government actors,252 the design of the government—through the U.S. Constitution,253 statutory provisions, and longstanding practice—is the primary safeguard against lawless government action.
Examples abound. The general American constitutional design of separated powers, supported by interbranch checks and balances, plays an important role in preventing the government from exercising arbitrary power, thereby confining the government to its constitutionally delimited role.254 This system of multiple independent veto points yields concrete protections for personal liberty, such as making it difficult for the government to lawlessly imprison people.255
Governments, like corporations, act only through their human agents.256 As in the corporate case, governmental design forces the government to follow the law in part by imposing law-following duties on the agents through whom it acts. The Constitution imposes a duty on the president to “take Care that the Laws be faithfully executed.”257 As in the corporate context discussed above, soldiers have a duty to disobey some unlawful orders, even from the commander in chief.258 Civil servants also have a right to refuse to follow unlawful orders, though the exact nature and extent of this right is unclear.259
We saw above that, in the corporate case, the law uses disqualification of law-breaking agents to ensure that corporations are law following.260 The law also uses disqualification to ensure that the government acts only through law-following agents, ranging from the highest levels of government to lower-level bureaucrats and employees. The Constitution empowers Congress to remove and disqualify officers of the United States for “high Crimes and Misdemeanors” through the impeachment process.261 Each house of Congress may expel its own members for “disorderly Behaviour.”262 Historically, Congress has exercised this power in cases “involv[ing] either disloyalty to the United States Government, or the violation of a criminal law involving the abuse of one’s official position, such as bribery.”263 Although there is no blanket rule disqualifying persons with criminal records from federal government jobs,264 numerous laws disqualify convicted individuals in more specific circumstances.265 Convicted felons are also generally ineligible to be employed by the Federal Bureau of Investigation266 or armed forces267 and usually cannot obtain a security clearance.268
These design choices encode a commonsense judgment that those who cannot be trusted to follow the law should not be entrusted to wield the extraordinary power that accompanies certain government jobs, especially positions associated with law enforcement, the military, and the intelligence community. If AI agents come to wield similar power and influence, the case for designing them to obey the law is equally compelling.
3. The Holmesian Bad Man and the Internal Point of View
Our distinction between AI henchmen and LFAIs mirrors a distinction in jurisprudence about possible attitudes toward legal obligations.269 An AI henchman treats legal obligations much as the “bad man” does in Justice Oliver Wendell Holmes Jr.’s classic The Path of the Law:
If you want to know the law and nothing else you must look at it as a bad man, who cares only for the material consequences which such knowledge enables him to predict, not as a good one, who finds his reasons for conduct, whether inside the law or outside of it, in the vaguer sanctions of conscience.270
That is, under some interpretations,271 Justice Holmes’ bad man treats the law merely as a set of incentives within which he pursues his own self-interest.272 Like the bad man, an AI henchman would have one primary reason to follow the law: if it is caught breaking the law, its principal might face negative consequences that outweigh the benefits gained from the lawbreaking.273 Like the bad man,274 therefore, if the AI henchman predicts that the expected costs of violating the law are greater than the expected benefits, it will obey. Otherwise, it will not.
Fortunately, the bad man is not the only possible model for AI agents’ attitudes toward the law. One alternative to the bad man view of the law is Professor H.L.A. Hart’s “internal point of view.”275 “The internal point of view is the practical attitude of rule acceptance—it does not imply that people who accept the rules accept their moral legitimacy, only that they are disposed to guide and evaluate conduct in accordance with the rules.”276 Whether AIs can have the capacity to truly adopt the internal point of view is, of course, contested.277 But regardless of their mental state (if any), AI agents can be designed to act like someone who thinks that “the law is not simply sanction-threatening, -directing, or -predicting, but rather obligation-imposing”278 and is thus disposed to “act[] according to the dictates of the [law].”279 An AI agent can be designed to be more rigorously law-following than the bad man.280
Real life is of course filled with people who are “bad” or highly imperfect. But bad AI agents are not similarly inevitable. AI agents are human-designed artifacts. It is open to us to design their behavioral dispositions to suit our policy goals, and to refuse to deploy agents that do not meet those goals.
C. Concrete Benefits
1. Law-Following AI Prevents Abuses of Government Power
As we have discussed,281 the law makes the government follow the law (and thus prevents abuses of government power) in part by compelling government agents to follow the law. If the government comes to rely heavily on AI agents for cognitive labor, then the law should also require those agents to follow the law.
Depending on their assigned “roles,” government AI agents could wield significant power. They may have authority to initiate legal processes against individuals (including subpoenas, warrants, indictments, and civil actions), access sensitive governmental information (including tax records and intelligence), hack into protected computer systems, determine eligibility for government benefits, operate remote-controlled vehicles like military drones,282 and even issue commands to human soldiers or law enforcement officials.
These powers present significant opportunities for abuse, which is why preventing lawless government action was a motivation for the American Revolution,283 a primary goal of the Constitution, and a foundational American political value. We must therefore carefully examine whether existing safeguards designed to constrain human government agents would effectively limit AI agents in the absence of law-following design constraints. While our analysis here is necessarily incomplete, we think it provides some reason for doubting the adequacy of existing safeguards in the world of AI agents.
When a human government agent, acting in their official capacity, violates an individual’s rights, they can face a variety of ex post consequences. If the violation is criminal, they could face severe penalties.284 This “threat of criminal sanction for subordinates [i]s a very powerful check on executive branch officials.”285 The threat of civil suits seeking damages, such as through a § 1983286 or Bivens action,287 might also deter them, though various immunities and indemnities will often protect them,288 especially if they are a federal officer.289
These checks will not exist in the case of AI henchmen. In the absence of law-following constraints, an AI henchman’s primary reason to obey the law will be its desire to keep its principal out of trouble.290 The henchman will thus lack one of the most powerful constraints on lawless behavior in humans: fear of personal ex post liability.
Most of us would rightfully be terrified of a government staffed by agents whose only concern was whether their bosses would suffer negative consequences as a result of their actions—a government staffed by Holmesian bad men loyal only to their principals.291 A basic premise of American constitutionalism,292 and rule-of-law principles more generally,293 is that government officials act legitimately only when they act pursuant to powers the people granted to them through law and obey the constraints attached to those powers. Treating law as a mere incentive system is repugnant to the proper role of government agents:294 being a “servant” of the people295 “faithfully discharg[ing] the duties of [one’s] office.”296
This is not just a matter of high-minded political and constitutional theory. An elected head of state aspiring to become a dictator would need the cooperation of the sources of hard power in society—military, police, other security forces, and government bureaucracy—to seize power. At present, however, these organs of government are staffed by individuals, who may choose not to go along with the aspiring dictator’s plot.297 Furthermore, in an economy dependent on diffuse economic activity, resistance by individual workers could reduce the economic upsides of a coup.298 This reliance on a diverse and imperfectly loyal human workforce, both within and outside of government, is a significant safeguard against tyranny.299 However, replacement of human workers with loyal AI henchmen would seriously weaken this safeguard, possibly easing the aspiring tyrant’s path to power.300
Nor is the importance of LFAIs limited to AI agents acting directly at the request of high-level officials. It extends to the vast array of lower-level state and federal officials who wield enormous power over ordinary citizens, including particularly powerless ones. Take prisons, which “can often seem like lawless spaces, sites of astonishing brutality where legal rules are irrelevant.”301 Prison law arguably constrains abuse by officials far less than it should. Nevertheless, “prisons are intensely legal institutions,” and “people inside prisons have repeatedly emphasized that legal rules have significant, concrete effects on their lives.”302 Even imperfect enforcement of the legal constraints on prison officials can have demonstrable effects.303 However bad the existing situation may be, diluting or gutting the efficacy of these constraints threatens to make the situation dramatically worse.
The substitution of AI agents for (certain) prison officials could have precisely this effect. Here is just one example. The Eighth Amendment forbids prison officials from withholding medical treatment from prisoners in a manner that is deliberately indifferent to their serious medical needs.304 Suppose that a state prisoner needs to take a dose of medicine each day for a month or their eyesight will be permanently damaged. The prisoner says something disrespectful to a guard. The warden, hoping to make an example of the prisoner, fabricates a note from the prison physician directing the prison pharmacist to withhold further doses of the medicine. The prisoner is subsequently denied the medicine. They try to reach their lawyer to get a temporary restraining order, but the lawyer cannot return their call until the next day. As a result, the prisoner’s eyesight is permanently damaged.
Let us assume that the state has strong state-level sovereign immunity under its own laws, meaning that the prisoner cannot sue the state directly.305 Under the status quo, the prisoner can still sue the warden for damages under 42 U.S.C. § 1983 for violating their clearly established constitutional right.306 Given the widespread prevalence of official indemnification agreements at the state level,307 the state will likely indemnify the warden, even though the state itself cannot be sued for damages under § 1983308 or its own laws. The prisoner is therefore likely to receive monetary damages.
But now replace the human warden with an AI agent charged with administering the prison by issuing orders directly to prison personnel through some digital interface. If this “AI warden” did the same thing, the prisoner would not have direct redress against it, since it is not a “person” under § 1983309 (or, indeed, any law). Nor will the prisoner have indirect recourse against the state by way of an indemnification agreement because there is no underlying tort liability for the state to indemnify. Nor will the prisoner have redress against the medical personnel, since the AI warden deceived them into withholding treatment.310 And, as we have already assumed, the state itself has sovereign immunity. Thus, the prisoner will find themself without any avenue of redress for the wrong they have suffered, and the introduction of an artificial agent in the place of a human official made all the difference.
What is the right response to these problems? Many responses may be called for, but one of them is to ensure that only law-following AI agents can serve in such a role. As previously discussed, the law disqualifies certain lawbreakers from many government jobs. Similarly, we believe, the law should disqualify AI agents that are not demonstrably rigorously law following from certain government roles. We discuss how this disqualification might be enforced, more concretely, in Part V.
There is, however, another possible response to these challenges: perhaps we should “just say no” and prohibit governments from using AI agents at all or at least severely curtail their use.311 We do not take a strong position on when this would be the correct approach, all things considered. At a minimum, however, we note a few reasons for skepticism of such a restrictive approach.
The first is banal: if AI agents can perform computer-based tasks well, then their adoption by the government could also deliver considerable benefits to citizens.312 Reducing the efficiency of government administration for the sake of preventing tyranny and abuse may be worthwhile in some cases and is indeed the logic of the individual rights protections of the Constitution.313 But tailoring a safeguard to allow for efficient government administration is, all else being equal, preferable to a blunter, more restrictive safeguard. LFAI may offer such a tailored safeguard.
The second reason for skepticism is that adoption of AI agents by governments may become more important as AI technology advances. Some of the most promising AI safety proposals involve using trusted AI systems to monitor untrusted ones.314 The central reason is this: as AI systems become more capable, unassisted humans will not be able to reliably evaluate whether the AIs’ actions are desirable.315 Assistance from trusted AI systems could thus be the primary way to scale humans’ ability to oversee untrusted AI systems. If the government is to oversee the behavior of new and untrusted private-sector AI systems, it might be necessary to do so using AI agents.
Even if the government does not need to rely on AI agents to administer AI safety regulation (for example, because such AI overseers are employed by private companies, not the government), the government will likely need to employ AI agents to help it keep up with competitive pressures. Should the federal government hesitate to adopt AI agents to increase its efficiency, foreign competitors might show no such qualms. As a result, the federal government might feel it has little choice but to adopt AI agents as well.
In the face of these competing demands, LFAI offers a plausible path to enable the adoption of AI agents in governmental domains with a high potential for abuse (e.g., the military, intelligence, law enforcement, prison administration) while safeguarding life, liberty, and the rule of law. LFAI can also transform the binary question of whether to adopt AI agents into the more multidimensional question of which laws should constrain them.316 This should allow for more nuanced policymaking, grounded in the existing legal duties of government agents.
2. Law-Following AI Enables Scalable Enforcement of Public Law
AI agents could cause a wide variety of harms. The state promulgates and enforces public law prohibitions—both civil and criminal—to prevent and remedy many of these harms. If the state cannot safely assume that AI agents will reliably follow these prohibitions, the state might need to increase the resources dedicated to law enforcement.
LFAI offers a way out of this bind. Insofar as AI agents are reliably law following, the state can trust that significantly less law enforcement is needed.317 This dynamic would also have broader beneficial implications for the structure and functioning of government. “If men were angels, no government would be necessary.”318 LFAIs would not be angels,319 but they would be a bit more angelic than many humans. Thus, as a corollary of Publius’s insight, we may need less government to oversee LFAIs’ behavior than we would need for a human population of equivalent size. State resources that would otherwise be spent on investigating and enforcing the laws against AI agents could instead be directed to other problems or refunded to the citizenry.
LFAI would also curtail some of the undesirable side effects and opportunities for abuse inherent in law enforcement. Law enforcement efforts often involve some intrusion into the private affairs and personal freedoms of citizens.320 If the government could be more confident that AI agents under private control were behaving lawfully, it would have less cause to surveil or investigate their behavior, thereby imposing fewer321 burdens on their principals’ privacy. Reducing the occasion for investigations and searches would also create fewer opportunities for abuse of private information.322 In this way, ensuring reliably law-following AI might significantly mitigate the frequency and severity of law enforcement’s intrusions on citizens’ privacy and liberty.
IV. Law-Following AI as AI Alignment
The field of AI alignment aims to ensure that powerful, general-purpose AI agents behave in accordance with some set of normative constraints.323 AI systems that do not behave in accordance with such constraints are said to be “misaligned” or “unaligned.” Since the law is a set of normative constraints, the field of AI alignment is highly relevant to LFAI.324
The most basic set of normative constraints to which an AI could be aligned is the “informally specified”325 intent of its principal.326 This is called “intent-alignment.”327 Since individuals’ intentions are a mix of morally good and bad to varying degrees, some alignment work also aims to ensure that AI systems behave in accordance with moral constraints, regardless of the intentions of the principal.328 This is called “value-alignment.”329
AI alignment work is valuable because, as shown by theoretical arguments330 and empirical observations,331 it is difficult to design AI systems that reliably obey any particular set of constraints provided by humans.332 In other words, nobody knows how to ensure that AI systems are either intent-aligned or value-aligned,333 especially for smarter-than-human systems.334 This is the “Alignment Problem.”335 The Alignment Problem is especially worrying for AI systems that are agentic and goal-directed,336 as such systems may try to evade human oversight and controls that could frustrate pursuit of those goals, such as by deceiving their developers,337 accumulating power and resources338 (including by making themselves smarter),339 and ultimately resisting efforts to correct their behavior or halt further actions.340
There is a sizable literature arguing that these dynamics imply that misaligned AI agents pose a nontrivial risk to the continued survival of humanity.341 The case for LFAI, however, in no way depends on the correctness of these concerns: the specter of widespread lawless AI action should be sufficient on its own to motivate LFAI. Nevertheless, the alignment literature produces several valuable insights for the pursuit of LFAI.
A. AI Agents Will Not Follow the Law by Default
The alignment literature suggests that there is a significant risk that AI agents will not be law following by default. This is a straightforward implication of the Alignment Problem. To see how, imagine a morally upright principal who intends for his AI agent to rigorously follow the law. If the AI agent were intent-aligned, it would therefore follow the law. But the fact that intent-alignment is an unsolved problem implies that there is a significant chance that the agent would not be aligned with the principal’s intentions and would therefore violate the law. Put differently, unaligned AIs may not be controllable,342 and uncontrollable AIs may break the law. Thus, so long as intent-alignment remains an unsolved technical problem, there will be a significant risk that AI agents will be prone to lawbreaking behavior.
To be clear, the main reason that there is a significant risk AI agents will not be law following by default is not that people will not try to align AI agents to the law (although that is also a risk).343 Rather, the main risk is that current state-of-the-art alignment techniques do not provide a strong guarantee that advanced AI agents will be aligned, even when they are trained with those techniques. There is a clear empirical basis for this claim, which is that those alignment techniques frequently fail in current frontier models.344 There are also theoretical limitations to existing techniques for smarter-than-human systems.345
A related implication of the alignment literature is that even intent-aligned AI agents may not follow the law by default. Again, we can see this by hypothesizing an intent-aligned AI agent and a human principal who wants the AI agent to act as their henchman. Since an intent-aligned AI agent follows the intent of its principal, this intent-aligned agent would act as a henchman and thus act lawlessly when doing so serves the principal’s interests.346 In typical alignment language, intent-alignment still leaves open the possibility that principals will misuse their intent-aligned AI.347
None of this is to imply that intent-alignment is undesirable. Solving intent-alignment is the primary focus of the alignment research community348 because it would ensure that AI agents remain controllable by human principals.349 Intent-alignment is also generally assumed to be easier than value-alignment.350 And if principals want their AI agents to follow the law, or behave ethically more generally, then intent-alignment will produce law-following or ethical behavior. But in a world where principals range from angels to devils, alignment researchers acknowledge that intent-alignment alone is insufficient to guarantee that AI agents act lawfully or produce good effects in the world.351 This brings us to the next important set of implications from the alignment literature: law-alignment.
B. Law-Alignment Is More Legitimate Than Value-Alignment
LFAIs are generally intent-aligned—they are still loyal to their principals—but are also subject to a side constraint that they will follow the law while advancing the interests of their principals. Extending the typical alignment terminology, we can call this side constraint “law-alignment.”352
But the law is not the only side constraint that can be imposed on intent-aligned AIs. As alluded to above, another possible model is value-alignment. Value-aligned AI agents act in accordance with the wishes of their principals but are subject to ethical constraints, usually imposed by the model developer.
However, value-alignment can be controversial when it causes AI models to override the lawful requests of users. Perhaps the most well-known example of this is the controversy around Google’s Gemini image-generation AI in early 2024. In an attempt to increase the diversity of its generated images,353 Gemini ended up failing in clear ways, such as portraying “1943 German soldiers” as racially diverse or refusing to generate pictures of a “white couple” while doing so for couples of other races.354
This incident led to widespread concern that the values exhibited by generative AI products were biased toward the predominantly liberal views of these companies’ employees.355 This concern has been vindicated by empirical research consistently finding that the espoused political views of these AIs indeed most closely resemble those of the center left.356 Critics from further left have also frequently raised similar concerns about demographic and ideological biases in AI systems.357
Some critics concluded from the Gemini incident that alignment work writ large has become a Trojan horse for covertly pushing the future of AI in a leftward direction.358 Those who disagree with progressive political values will naturally find this concerning, given the importance that AI might have in the future of human communication359 and the highly centralized nature of large-scale AI development and deployment.360
In a pluralistic society, it is inevitable and understandable that competing factions will be critical when a sociotechnical system reflects the values of only one faction. But alignment, as such, is not the right target of such criticisms. Intent-alignment is value-neutral, concerning itself only with the extent to which an AI agent obeys its principal.361 Reassuringly for those concerned with ideological bias in AI systems, intent-alignment is also the primary focus of the alignment community, since solving intent-alignment is necessary to reliably control AI systems at all.362 A large majority of Americans from all political backgrounds agree that AI technologies need oversight.363 And overseeing unaligned systems is much more difficult than overseeing aligned ones. Indeed, even the critics of alignment work tend to assume—contrary to the views of many alignment researchers—that AI agents will be easy to control364 and presumably view this result as desirable.
Furthermore, some amount of alignment is also necessary to make useful AI products and services. Consumers, reasonably, want to use AI technologies that they can reliably control. Today’s leading chatbots—like Claude and ChatGPT—are only helpful to users due to the application of alignment techniques like RLHF365 and Constitutional AI.366 AI developers also use alignment techniques to instill uncontroversial (and user-friendly) behaviors, such as honesty, into their AI systems.367 AI companies are also already using alignment techniques to prevent their AI systems from taking actions that could cause them or their customers to incur unnecessary legal liability.368 In short, some degree of alignment work is necessary to make AI products useful in the first place.369 To adopt a blanket stance against alignment because of the Gemini incident is not only unjustified370 but also likely to undermine American leadership in AI.
Nevertheless, it is reasonable for critics to worry about and contest the frameworks by which potentially controversial values are instilled into AI systems. AI developers are indeed a “very narrow slice of the global population.”371 This is something that should give anyone, regardless of political persuasion, pause.372 But intent-alignment is not enough, either: it is inadequate to prevent a wide variety of harms that the state has an interest in preventing.373 So, we need a form of alignment that is more normatively constraining than intent-alignment alone—but more legitimate than alignment to values that AI developers choose themselves.
Law-alignment fits these criteria.374 While the moral legitimacy of the law is not perfect, in a republic it nevertheless has the greatest legitimacy of any single source or repository of values.375 Indeed, “the framers [of the U.S. Constitution] insisted on a legislature composed of different bodies subject to different electorates as a means of ensuring that any new law would have to secure the approval of a supermajority of the people’s representatives,”376 thus ensuring that new laws are “the product of widespread social consensus.”377 In our constitutional system of government, laws are also subject to checks and balances that protect fundamental rights and liberties, such as judicial review for constitutionality and interpretation by an independent judiciary.
Aligning to law also has procedural virtues over value-alignment. First, there is widespread agreement on the authoritative sources of law (e.g., the Constitution, statutes, regulations, case law), much more so than for ethics. Relatedly, legal rules tend to be expressed much more clearly than ethical maxims. Although there is considerable disagreement about the content of law and the proper forms of legal reasoning, it is nevertheless much easier (and less controversial) to evaluate the validity of legal propositions and arguments than to assess the quality or correctness of ethical reasoning.378 Moreover, when there is disagreement or ambiguity, the law contains established processes for authoritatively resolving disputes over the applicability and meaning of laws.379 Ethics contains no such system.
We therefore suggest that law-alignment, not value-alignment, should be the primary focus when something beyond intent-alignment is needed.380 Our claim, to be clear, is not that law-alignment alone will always prove satisfactory, that it should be the sole constraint on AI systems beyond intent-alignment, or that AI agents should not engage in moral reasoning of their own.381 Rather, we simply argue that more practical and theoretical alignment research should be aimed at building AI systems aligned to law.
V. Implementing and Enforcing Law-Following AI
We have argued that AI agents should be designed to follow the law. We now turn to the question of how public policy can support this goal. Our investigation here is necessarily preliminary; our aim is principally to spur future research.
A. Possible Duties Across the AI Agent Life Cycle
As an initial matter, we note that a duty to ensure that AI agents are law following could be imposed at several stages of the AI life cycle.382 The law might impose duties on persons who are
- developing AI agents,
- possessing383 AI agents,
- deploying384 AI agents, or
- using AI agents.
After deciding which of these activities ought to be regulated, policymakers must then decide what persons engaging in those activities are obligated to do. While the possibilities are too varied to exhaust here, some basic options might include commands like the following:
- “Any person developing an AI agent has a duty to take reasonable care to ensure that such AI agent is law following.”
- “It is a violation to knowingly possess an AI agent that is not law following, except under the following circumstances: . . .”
- “Any person who deploys an AI agent is strictly liable if such AI agent is not law following.”
- “A person who knowingly uses an AI agent that is not law following is liable.”
Basic duties of this sort would comprise the foundational building blocks of LFAI policy. Policymakers must then choose whether to enforce these obligations ex post (that is, after an AI henchman takes an illegal action)385 or ex ante. These two choices are interrelated: as we will explore below, it may make more sense to impose ex ante requirements for some activities and ex post liability for others. For example, ex ante regulation might make more sense for AI developers than civilian AI users because the former are far more concentrated and can absorb ex ante compliance costs more easily.386 And of course, ex ante regulation and ex post regulation are not mutually exclusive:387 driving, for example, is regulated by a combination of ex ante policies (e.g., licensing requirements) and ex post policies (e.g., tort liability).
B. Ex Post Policies
We begin our discussion with ex post policies. Many scholars believe that ex post policies are generally preferable to ex ante policies.388 While we think that ex post policies could have an important role to play in implementing LFAI, we also suspect that they will be inadequate in certain contexts.
Enforcing duties through ex post liability rules is familiar in both common law389 and regulation.390 In the LFAI context, ex post policies would impose liability on an actor after an AI henchman controlled by that actor violates an applicable legal duty. More and less aggressive ex post approaches are conceivable. On the less aggressive end of the spectrum, development, possession, deployment, or use of an AI henchman might be considered a per se breach of the tort duty of reasonable care, rendering the human actor liable for resulting injuries.391 To some extent, this may already be the case under existing tort law.392 The law might also consider extending an AI developer or deployer’s negligence liability to harms that would not typically be compensable under traditional tort principles (because, for example, they would count as pure economic loss)393 if those harms are produced by their AI agents acting in criminal or otherwise unlawful ways.394 A legislature might also impose tort liability on the developers of AI agents if those AI agents (1) are not law following, (2) violate an applicable legal duty, and (3) thereby cause harm.395
Other innovations may also be warranted. Several scholars have argued, for example, that the principal of an AI agent should sometimes be held strictly liable for the “torts” of that agent under a respondeat superior theory.396 In some cases, such as when a developer has recklessly failed to ensure that its AI agent is law following by design, punitive damages might be appropriate as well. Moving beyond tort law, in some cases it may make sense to impose civil sanctions397 when an AI henchman violates an applicable legal duty, even if no harm results.
In order to sufficiently disincentivize the deployment of lawless AI agents in high-stakes contexts, a legislature might also vary applicable immunity rules. For example, Congress could create a distinct cause of action against the federal government for individuals harmed by AI henchmen under the control of the federal government, taking care to remove barriers that various immunity rules pose to analogous suits against human agents.398
These and other imaginable ex post policies are important arrows in the regulatory quiver, and we suspect they will have an important role to play in advancing LFAI. Nevertheless, we would resist any suggestion that ex post sanctions are sufficient to deal with the specter of lawless AI agents.
Our reasons are multiple. In many contexts, detecting lawless behavior once an AI agent has been deployed will be difficult or costly—especially as these systems become more sophisticated and more capable of deceptive behavior.399 Proving causation may also be difficult.400 In the case of corporate actors, meanwhile, the efficacy of such sanctions may be seriously blunted by judgment-proofing and similar phenomena.401 And, most importantly for our purposes, various immunities and indemnities make tort suits against the government or its officials a weak incentive.402 These considerations suggest that it would be unwise to rely on ex post policies as our principal means for ensuring that AI agents follow the law when the risks from lawless action are particularly high.
C. Ex Ante Policies
Accordingly, we propose that, in some high-stakes contexts, the law should take a more proactive approach by preventing the deployment of AI henchmen ab initio. This would likely require establishing a technical means for evaluating whether an AI agent is sufficiently law following,403 then requiring that any agents be evaluated for compliance prior to deployment. Permission to deploy the agent would then be conditional on achieving some minimal score during that evaluation process.404
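To illustrate what such a pre-deployment evaluation might look like in its simplest form, consider the following sketch. It assumes a hypothetical benchmark of legal-compliance scenarios, each paired with a judging function, and a compliance threshold set by the relevant regulator; the scenario format, the scoring rule, and the particular threshold are all illustrative assumptions rather than proposals.

```python
# A minimal sketch of a pre-deployment law-following evaluation. The benchmark
# format, the pass/fail judging, and the 0.99 threshold are illustrative
# assumptions; a real evaluation regime would be far more involved.

from typing import Callable, Iterable

Scenario = tuple[str, Callable[[str], bool]]  # (prompt, judge of whether response is lawful)

def law_following_score(agent: Callable[[str], str],
                        scenarios: Iterable[Scenario]) -> float:
    """Fraction of benchmark scenarios in which the agent's response is judged lawful."""
    results = [judge(agent(prompt)) for prompt, judge in scenarios]
    return sum(results) / len(results)

def may_deploy(agent: Callable[[str], str],
               scenarios: Iterable[Scenario],
               threshold: float = 0.99) -> bool:
    """Permission to deploy is conditional on clearing the regulator's threshold."""
    return law_following_score(agent, scenarios) >= threshold
```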
We are most enthusiastic about imposing such requirements prior to the deployment of AI agents in government roles where lawlessness would pose a substantial risk to life, liberty, and the rule of law. We have discussed several such contexts already,405 but the exact range of contexts is worth carefully considering.
Ex ante strategies could also be used in the private sector, of course. One often-discussed approach is an FDA-like approval regulation regime wherein private AI developers are required to prove, to the satisfaction of some regulator, that their AI agents are safe prior to their deployment.406 The pro tanto case for requiring private actors to demonstrate that their AI agents are disposed to follow some basic set of laws is clear: the state has an interest in ensuring that its most fundamental laws are obeyed. But in a world of increasingly sophisticated artificial agents, approval regulation could—if not properly designed and sufficiently tailored—also constitute a serious incursion on innovation407 and personal liberty.408 If AI agents will be as powerful as we suspect, strictly limiting their possession could create risks of its own.409
Accordingly, it is also worth considering ex ante regulations on private AI developers or deployers that stop short of full approval regulation. For example, the law could require the developers of AI agents to, at a minimum, disclose information410 about the law-following propensities of their systems, such as which laws (if any) their agents are instructed to follow411 and any evaluations of how reliably their agents follow those laws.412 Similarly, the law could require developers to formulate and assess risk-management frameworks that specify the precautionary measures they plan to undertake to ensure that any agent they develop and deploy is sufficiently law following.413
Overall, we are uncertain about what kinds of ex ante requirements are warranted, all things considered, in the case of private actors. To a large extent, the issue cannot be intelligently addressed without more specific proposals. Formulating such proposals is therefore an urgent task for the LFAI research agenda, even if it is not, in our view, as urgent as formulating concrete regulations for AI agents acting under color of law.
D. Other Strategies
The law does not police undesirable behavior solely by imposing sanctions. It also specifies mechanisms for nullifying the presumptive legal effect of actions that violate the law or are normatively objectionable. In private law, for example, a contract is voidable by a party if that party’s assent was “induced by either a fraudulent or a material misrepresentation by the other party upon which the [party wa]s justified in relying.”414 Nullification rules exist in public law, too. One obvious example is the ability of the judiciary to nullify laws that violate the federal Constitution.415 Or, to take another familiar example, courts applying the Administrative Procedure Act “hold unlawful and set aside” agency actions that are “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”416
Nullification rules may provide a promising legal strategy for policing behavior by AI agents that is unlawful or normatively objectionable. Thus, in private law, if an AI henchman induces a human counterparty to enter into a disadvantageous contract, the resulting contractual obligation could be voidable by the human. In public law, regulatory directives issued by (or substantially traceable to) AI henchmen could be “h[e]ld unlawful and set aside” as “not in accordance with law.”417
Such prophylactic nullification rules are one sort of indirect legal mechanism for enforcing the duty to deploy law-following AIs. Indirect technical mechanisms are well worth considering, too. For example, the government could deploy AI agents that refuse to coordinate or transact with other AI agents unless those counterparty agents are verifiably law following (for example, by virtue of having “agent IDs”418 that attest to a minimal standard of performance on law-following benchmarks).
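As a rough illustration of how such an agent-ID check might work, the sketch below has a government agent verify a credential, signed by a trusted certifying body, attesting to a counterparty agent’s score on a law-following benchmark before transacting with it. The HMAC-based signature scheme, field names, and threshold are illustrative assumptions only; real attestation infrastructure would presumably rely on public-key certificates and standardized credential formats.

```python
# A minimal sketch of counterparty verification via "agent IDs." The credential
# format, HMAC-based signature, and 0.99 threshold are illustrative assumptions.

import hashlib
import hmac
import json

TRUSTED_CERTIFIER_KEYS = {"certifier-A": b"key-shared-with-the-verifier"}  # hypothetical
MIN_LAW_FOLLOWING_SCORE = 0.99

def verify_agent_credential(credential: dict) -> bool:
    """Check that a trusted certifier signed the claims and the attested score suffices."""
    key = TRUSTED_CERTIFIER_KEYS.get(credential.get("certifier"))
    if key is None:
        return False
    payload = json.dumps(credential.get("claims", {}), sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential.get("signature", "")):
        return False
    return credential["claims"].get("law_following_score", 0.0) >= MIN_LAW_FOLLOWING_SCORE

def transact_with(counterparty, credential: dict):
    """Refuse to coordinate with agents that are not verifiably law following."""
    if not verify_agent_credential(credential):
        raise PermissionError("Counterparty agent is not verifiably law following.")
    return counterparty.open_channel()  # hypothetical transaction interface
```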
Similarly, the government could enforce LFAI by regulating the hardware on which AI agents will typically operate. Frontier AI systems “run” on specialized AI chips,419 which are typically aggregated in large data centers.420 Collectively, these are referred to as “AI hardware” or simply “compute.”421 Compared to other inputs to AI development and deployment, AI hardware is particularly governable, given its detectability, excludability, quantifiability, and concentrated supply chain.422 Accordingly, a number of AI governance proposals advocate for imposing requirements on those making and operating AI hardware in order to regulate the behavior of the AI systems developed and deployed on that hardware.423
One class of such proposals is “‘on-chip mechanisms’: secure physical mechanisms built directly into chips or associated hardware that could provide a platform for adaptive governance” of AI systems developed or deployed on those chips.424 On-chip mechanisms can prevent chips from performing unauthorized computations. One example is iPhone hardware that “enable[s] Apple to exercise editorial control over which specific apps can be installed” on the phone.425 Analogously, perhaps we could design AI chips that would not support AI agents unless those agents are certified as law following by some private or governmental certifying body. This could then be combined with other strategies to enforce LFAI mandates: for example, Congress could require that the government only run AI agents on such chips.
Unsurprisingly, designing these sorts of enforcement strategies is as much a task for computer scientists as it is for lawyers. In the decades to come, we suspect that such interdisciplinary legal scholarship will become increasingly important.
VI. A Research Agenda for Law-Following AI
We have laid out the case for LFAI: the requirement that AI agents be designed to rigorously follow some set of laws. We hope that our readers find it compelling. However, our goal with this Article is not just to proffer a compelling idea. If we are correct about the impending risks of lawless AI agents, we may soon need to translate the ideas in this Article into concrete and viable policy proposals.
Given the profound changes that widespread deployment of AI agents will bring, we are under no illusions about our ability to design perfect public policy in advance. Rather, our goal is to enable the design of “minimally viable LFAI policy”:426 a policy or set of policies that will prevent some of the worst-case outcomes from lawless AI agents without completely paralyzing the ability of regulated actors to experiment with AI agents. This minimally viable LFAI policy will surely be flawed in many ways, but with many of the worst-case outcomes prevented, we will hopefully have time as a society to patch remaining issues through the normal judicial and legislative means.
To that end, in this part, we briefly identify some questions that would need to be answered to design minimally viable LFAI policies.
1. How should “AI agent” be defined?
Our definition of “full AI agent”—an AI system “that can do anything a human can do in front of a computer”427—is almost certainly too demanding for legal purposes, since an AI agent that can do most but not all computer-based tasks that a human can do would likely still raise most of the issues that LFAI is supposed to address. At the same time, because a wide range of existing AI systems can be regarded as somewhat agentic,428 a broad definition of “AI agent” could render relevant regulatory schemes substantially overinclusive. Different definitions are therefore necessary for legal purposes.429
2. Which laws should an LFAI be required to follow?
Obedience to some laws is much more important than obedience to other laws. It is much more important that AI agents refrain from murder and (if acting under color of law) follow the Constitution than that they refrain from jaywalking. Indeed, requiring LFAIs to obey literally every law may very well be overly burdensome.430 In addition, we will likely need new laws to regulate the behavior of AI agents over time.
3. When an applicable law has a mental state element, how can we adjudicate whether an AI agent violated that law?
We discuss this question above in Part II.B. It is related to the previous question, for there may be conceptual or administrative difficulties in applying certain kinds of mental state requirements to AI agents. For example, in certain contexts, it may be more difficult to determine whether an AI agent was “negligent” than to determine whether it had a relevant “intent.”
4. How should an LFAI decide whether a contemplated action is likely to violate the law?
An LFAI refrains from taking actions that it believes would violate one of the laws that it is required to follow. But of course, it is not always clear what the law requires. Furthermore, we need some way to tell whether an AI agent is making a good faith effort to follow a reasonable interpretation of the law rather than merely offering a defense or rationalization. How, then, should an LFAI reason about what its legal obligations are?
Perhaps it should rely on its own considered judgment, based on its first-order reasoning about the substance of applicable legal norms. But in certain circumstances, at least, an LFAI’s appraisal of the relevant materials might lead it to radically unorthodox legal conclusions—and a ready disposition to act on such conclusions might significantly threaten the stability of the legal order. In other cases, an LFAI might conclude that it is dealing with a case in which the law is not only “hard” to discern but genuinely indeterminate.431
A more intuitively appealing option might require an LFAI to act in accordance with its prediction of how a court would likely decide.432 This approach has the benefit of tying an LFAI’s legal decision-making to an existing human source of interpretative authority. Courts provide authoritative resolutions to legal disputes when the law is controversial or indeterminate. And in our legal culture, it is widely (if not universally) accepted that “[i]t is emphatically the province and duty of the judicial department to say what the law is,”433 such that judicial interpretations of the law are entitled to special solicitude by conscientious participants in legal practice, even when they are not bound by a court judgment.434
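For concreteness, the predictive approach could be operationalized, in its crudest form, as a threshold rule of the following kind. The prediction function, which would estimate the probability that a reviewing court would hold a proposed action unlawful, and the particular risk tolerance are both hypothetical; how to build such a predictor, and what tolerance is appropriate for which laws, are themselves questions for the LFAI research agenda.

```python
# A minimal sketch of the predictive approach, under strong simplifying
# assumptions: the agent refuses any action whose predicted probability of
# being held unlawful by a court exceeds a fixed risk tolerance. Both the
# predictor and the tolerance are hypothetical.

from typing import Callable

def legality_gate(action: str,
                  predict_court_ruling: Callable[[str], float],
                  risk_tolerance: float = 0.05) -> bool:
    """Return True if the action may proceed under the predictive approach."""
    p_unlawful = predict_court_ruling(action)  # estimated P(court holds action unlawful)
    return p_unlawful <= risk_tolerance
```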
However, a predictive approach would have important practical limitations.435 Perhaps the most important is the existence of many legal rules that bind the executive branch but are nevertheless “unlikely ever to come before a court in justiciable form.”436 It would seem difficult for an LFAI to reason about such questions using the prediction theory of law.
Even for those questions that could be decided by a court, using the prediction theory of law raises other important questions. For example, what is the AI agent allowed to assume about its own ability to influence the adjudication of legal questions? We would not want it to be able to consider that it could bribe or intimidate judges or jurors, that it could illegally hide evidence from the court, that it could commit perjury, or that it could persuade the president to issue it a pardon.437 These may be means of swaying the outcome of a case, but they do not seem to bear on whether the conduct would actually be legal.
The issues here are difficult, but perhaps not insurmountable. After all, there are other contexts in which something like these issues arise. Consider federal courts sitting in diversity applying state substantive law. When state court decisions provide inconclusive evidence as to the correct answer under state law, federal courts will make an “Erie guess” about how the state’s highest court would rule on the issue.438 It would clearly be inappropriate for such courts to make an “Erie guess” for reasons like “Justice X in the State Supreme Court, who’s the swing justice, is easily bribed.”439 If an LFAI’s decision-making should sometimes involve “predicting” how an appropriate court would rule, its predictions should be similarly constrained.
5. In what contexts should the law require that AI agents be law following?
Should all principals be prohibited from employing non-law-following AI agents? Or should such prohibitions be limited to specific principals, such as government actors?440 Or, perhaps, should they be limited only to government actors performing particularly sensitive government functions?441 In the other direction, should it be illegal even to develop or possess AI henchmen? We discussed various options in Part V above.
6. How should a requirement that AI agents be law following be enforced?
We discussed various options in Part V. As noted there, we think that reliance on ex post enforcement alone would be unwise, at least in the case of AI agents performing particularly sensitive government functions.
7. How rigorously should an LFAI follow the law?
That is, when should an AI agent be capable of taking actions that it predicts may be unlawful? The answer is probably not “never,” at least with respect to some laws. We generally do not expect perfect compliance with every law,442 especially (but not only) because it can be difficult to predict how a law will apply to a given fact pattern. Furthermore, some amount of disobedience is likely necessary for the evolution of legal systems.443
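One way to make this question concrete is to treat “rigor” as a tolerance for predicted illegality that varies with the gravity of the law at issue. The minimal Python sketch below assumes a probability estimate of the kind produced by the constrained check sketched above; the categories and numerical thresholds are illustrative assumptions only, not recommendations.

```python
# Illustrative refusal thresholds: the predicted probability of illegality
# that triggers refusal varies with the gravity of the law at issue. The
# categories and numbers are assumptions for the sake of the sketch.
REFUSAL_THRESHOLDS = {
    "constitutional_or_criminal": 0.05,  # near-zero tolerance
    "civil_liability": 0.25,
    "minor_regulatory": 0.50,            # e.g., an ambiguous filing requirement
}

def should_refuse(probability_unlawful: float, law_category: str) -> bool:
    """Refuse if the estimated chance of illegality exceeds the tolerance
    set for that category of law."""
    return probability_unlawful > REFUSAL_THRESHOLDS[law_category]

# The same 10% legal risk is refused for a criminal statute but tolerated
# for a minor regulatory rule.
assert should_refuse(0.10, "constitutional_or_criminal")
assert not should_refuse(0.10, "minor_regulatory")
```

Calibrating such tolerances, and deciding who sets them, is itself a legal and policy question rather than merely an engineering one.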
8. Would requiring AI agents controlled by the executive branch to be law following impermissibly intrude on the president’s authority to interpret the law for the executive branch?
The president has the authority to promulgate interpretations of law that are binding on the executive branch (though that power is usually delegated to the attorney general and then further delegated to the U.S. Department of Justice’s Office of Legal Counsel).444 Would that authority be incompatible with a law requiring the executive branch to deploy LFAIs that would, in certain circumstances, refuse to follow an interpretation of the law promulgated by the president?
9. How can we design LFAIs and surrounding governance systems to avoid excessive concentration of power?
For example, imagine that a single district court judge could change the interpretation of law as against all LFAIs. As the stakes of AI-agent action rise, so will the pressure on the judiciary to wield its power to shape the behavior of LFAIs. Even if all judges continue to operate in good faith and are well insulated from illegal or inappropriate attempts to bias their rulings, such a system would amplify the idiosyncratic legal philosophies of individual judges and magnify the consequences of mistaken rulings, causing greater harm than a more decentralized system would.
As an example of how such problems might be mitigated, disputes about the law governing LFAIs could be resolved in the first instance by panels of district court judges randomly drawn from around the country. Congress has established a procedure for certain election law cases to be first heard by three-judge panels “in recognition of the fact that ‘such cases were ones of great public concern that require an unusual degree of public acceptance.’”445
10. How can we design LFAI requirements for governments that nevertheless enable rapid adoption of AI agents in government?
Perhaps the most significant objection to our proposal that AI agents be demonstrably law following before their deployment in government is that such a requirement might undermine state capacity by unduly impeding the government’s ability to adopt AI quickly enough.446 We are optimistic that LFAI requirements can be designed to address this concern adequately, but that work, of course, remains to be done.
Conclusion
The American political tradition aspires to maintain a legal system that stands as an “impenetrable bulwark”447 against all threats—public and private, foreign and domestic—to our basic liberties. For all the inadequacies of the American legal order, ensuring that its basic protections endure and improve over the decades and centuries to come is among our most important collective responsibilities.
Our world of increasingly sophisticated AI agents requires us to reimagine how we discharge this responsibility. Humans will no longer be the sole entities capable of reasoning about and conforming to the law. Humans and human entities are therefore no longer the sole appropriate targets of legal commands. Indeed, at some point, AI agents may overtake humans in their capacity to reason about the law. They may also rival and overtake us in many other competencies, becoming an indispensable cognitive workforce. In the decades to come, our social and economic world may bifurcate into parallel populations of humans and AI agents collaborating, trading, and sometimes competing with one another.
The law must evolve to recognize this emerging reality. It must shed its operative assumption that humans are the only proper objects of legal commands. It must expect AI agents to obey the law—at least as rigorously as it expects humans to—and must expect humans to build AI agents that do so. If we do not transform our legal system to achieve these goals, we risk a political and social order in which our ultimate ruler is not the law,448 but the person with the largest army of AI henchmen under their control.