Why give AI agents actual legal duties?
The core proposition of Law-Following AI (LFAI) is that AI agents should be designed to refuse to take illegal actions in the service of their principals. However, as Ketan and I explain in our writeup of LFAI for Lawfare, this raises a significant legal problem:
[A]s the law stands, it is unclear how an AI could violate the law. The law, as it exists today, imposes duties on persons. AI agents are not persons, and we do not argue that they should be. So to say “AIs should follow the law” is, at present, a bit like saying “cows should follow the law” or “rocks should follow the law”: It’s an empty statement because there are at present no applicable laws for them to follow.
Let’s call this the Law-Grounding Problem for LFAI. LFAI requires classifying AI actions as either legal or illegal. The problem arises because courts generally cannot reason about the legality of actions taken by an actor that lacks some sort of legally recognized status, and AI systems currently have no such status.1
In the LFAI article, we propose solving the Law-Grounding Problem by making AI agents “legal actors”: entities on which the law actually imposes legal duties, even if they have no legal rights. This is explained and defended more fully in Part II of the article. Let’s call this the Actual Approach to the Law-Grounding Problem.2 Under the Actual Approach, claims like “that AI violated the Sherman Act” are just as true within our legal system as claims like “Jane Doe violated the Sherman Act.”
There is, however, another possible approach that we did not address fully in the article: saying that an AI agent has violated the law if it took an action that, if taken by a human, would have violated the law.3 Let’s call this the Fictive Approach to the Law-Grounding Problem. Under the Fictive Approach, claims like “that AI violated the Sherman Act” would not be true in the same way that claims like “Jane Doe violated the Sherman Act” are. Instead, statements like “that AI violated the Sherman Act” would be, at best, a convenient shorthand for statements like “that AI took an action that, if taken by a human, would have violated the Sherman Act.”
I will argue that the Actual Approach is preferable to the Fictive Approach in some cases.4 Before that, however, I will explain why someone might be attracted to the Fictive Approach in the first place.
Motivating the Fictive Approach
To say that something is fictive is not to say that it is useless; legal fictions are common and useful. The Fictive Approach to the Law-Grounding Problem has several attractive features.
The first is its ease of implementation: the Fictive Approach does not require any fundamental rethinking of legal ontology. We do not need to either grant AI agents legal personhood or create a new legal category for them.
The Fictive Approach might also track common language use: when people make statements like “Claude committed copyright infringement,” they probably mean it in the fictive sense.
Finally, the Fictive Approach also mirrors how we think about similar problems, like immunity doctrines. The King of England may be immune from prosecution, but we can nevertheless speak intelligibly of his actions as lawful or unlawful by analyzing what the legal consequences would be if he were not immune.
Why prefer the Actual Approach?
Nevertheless, I think there are good reasons to prefer the Actual Approach over the Fictive Approach.
Analogizing to Humans Might Be Difficult
The strongest reason, in my opinion, is that AI agents may “think” and “act” very differently from humans. The Fictive Approach requires us to take a string of actions that an AI took and ask whether a human who performed the same actions would have acted illegally. The problem is that AI agents can take actions that would be very hard for humans to take, and so judges and jurors might struggle to analyze the legal consequences of a human doing the same thing.
Today’s proto-agents are somewhat humanlike in that they receive instructions in natural language, use computer tools designed for humans, reason in natural language, and generally take actions serially at approximately human pace and scale. But we should not expect this paradigm to last. For example, AI agents might soon:
- Consume the equivalent of dozens of books per day with perfect recall
- Have memories that do not decay over time
- Create copies of themselves and delegate tasks to those copies
- Reason near-perfectly about what other copies of themselves are thinking
- Interact simultaneously with hundreds of people
- Erase their own “memory”
- Allow other models to see their neural architecture or activations
- Use tools built specifically for AI agents (and unusable by humans)
- Communicate in artificial languages
- Reason in latent space
And these are just some of the most foreseeable; over time, AI agents will likely become increasingly alien in their modes of reasoning and action. If so, then the Fictive Approach will become increasingly strained: judges and jurors will find themselves trying to determine whether actions that no human could have taken would have violated the law if performed by a human. At a minimum, this would require unusually good analogical reasoning skills; more likely, the coherence of the reasoning task would break down entirely.
Developing Tailored Laws and Doctrines for AIs
LFAI is motivated in large part by the belief that AI agents that are aligned to “a broad suite of existing laws”5 would be much safer than AI agents unbound by existing laws. But new laws specifically governing the behavior of AI agents will likely be necessary as AI agents transform society.6 The Fictive Approach, however, cannot ground such AI-specific laws. Recall that the Fictive Approach says that an action by an AI agent violates a law just in case a human who took that action would have violated that law. But if the law in question applies only to AI agents, the Fictive Approach cannot be applied: no human could violate the law in question.
Relatedly, we may wish to develop new AI-specific legal doctrines, even for laws that apply to both humans and AIs. For example, we might wish to develop new doctrines for applying existing laws with a mental state component to AI agents.7 Alternatively, we may need to develop doctrines for determining when multiple instances of the same (or similar) AI models should be treated as identical actors. But the Fictive Approach is in tension with the development of AI-specific doctrines, since the whole point of the Fictive Approach is precisely to avoid reasoning about AI systems in their own right.
These conceptual tensions may be surmountable. But as a practical matter, a legal ontology that enables courts and legislatures to actually reason about AI systems in their own right seems more likely to produce nuanced doctrines and laws that are responsive to the actual nature of AI systems. The Fictive Approach, by contrast, encourages courts and legislatures to map AI actions onto human actions, an exercise that risks overlooking or minimizing the significant differences between humans and AI systems.
Grounding Respondeat Superior Liability
Some scholars propose using respondeat superior to impose liability on the human principals of AI agents for any “torts” committed by the latter.8 However, “[r]espondeat superior liability applies only when the employee has committed a tort. Accordingly, to apply respondeat superior to the principals of an AI agent, we need to be able to say that the behavior of the agent was tortious.”9 And we can say that an AI agent’s behavior was truly tortious only if the agent had a legal duty that it violated. The Actual Approach allows for this; the Fictive Approach does not.
Of course, another option is simply to use the Fictive Approach for respondeat superior liability as well. But the Actual Approach seems preferable insofar as it does not require this additional doctrinal move. More generally, precisely because the Actual Approach integrates AI systems into the legal system more fully, it can be leveraged to parsimoniously solve problems in areas of law beyond LFAI.
Optionality for Eventual Legal Personhood
In the LFAI article, we take no position as to whether AI agents should be given legal personhood: a bundle of duties and rights.10 However, there may be good reasons to grant AI agents some set of legal rights.11
Treating AI agents as legal actors under the Actual Approach creates optionality with respect to legal personhood: if the law recognizes an entity’s existence and imposes duties on it, it is easier for the law to subsequently grant that entity rights (and therefore personhood). But, we argue, the Actual Approach creates no obligation to do so:12 the law can coherently say that an entity has duties but no rights. Since it is unclear whether giving AIs rights would be desirable, this optionality is valuable.
* * *
AI companies13 and policymakers14 are already tempted to impose legal duties on AI systems. To make serious policy progress toward that goal, they will need to decide whether to actually impose such duties, or merely to use “lawbreaking AIs” as shorthand for some strained analogy to lawbreaking humans. Choosing the former path—the Actual Approach—is simpler and more adaptable, and therefore preferable.