Blog Post | August 2024

The limits of liability

Gabriel Weil

I’m probably as optimistic as anyone about the role that liability can play in AI governance. Indeed, as I’ll argue in a forthcoming article, I think it should be the centerpiece of our AI governance regime. But it’s important to recognize its limits.

First and foremost, liability alone is not an effective tool for solving public good problems. This means it is poorly positioned to address at least some challenges presented by advanced AI. Liability is principally a tool for addressing risk externalities generated by training and deploying advanced AI systems. That is, AI developers and their customers largely capture the benefits of increasing AI capabilities, but most of the risk is borne by third parties who have no choice in the matter. This is the primary market failure associated with AI risk, but it’s not the only one. There is also a public good problem with AI alignment and safety research. Like most information goods, advances in alignment and safety research are non-rival (you and I can both use the same idea without leaving less for the other) and non-excludable (once you come up with an idea, it’s hard to keep others from using it, since putting the idea to use tends to let the secret out). Markets generally underprovide public goods, and AI safety research is no exception. Plausible policy interventions to address this problem include prizes and other forms of public subsidies. Private philanthropy can also continue to play an important role in supporting alignment and safety research. There may also be winner-take-all race dynamics that generate market distortions not fully captured by the risk externality and public goods problems.

Second, there are some plausible AI risk externalities that liability cannot realistically address, especially those involving structural harms or highly attenuated causal chains. For instance, if AI systems are used to spread misinformation or interfere with elections, this is unlikely to give rise to a liability claim. To the extent that AI raises novel issues in those domains, other policy ideas may be needed. Similarly, some ways of contributing to the risk of harm are too attenuated to trigger liability claims. For example, if the developer of a frontier or near-frontier model releases information about the model and its training data/process that enables lagging labs to move closer to the frontier, this could induce leading labs to move faster and exercise less caution. But it would not be appropriate or feasible to use liability tools to hold the first lab responsible for the downstream harms from this race dynamic. 

Liability also has trouble handling uninsurable risks (those that might cause harms so large that a compensatory damages award would not be practically enforceable) if warning shots are unlikely. In my recent paper laying out a tort liability framework for mitigating catastrophic AI risk, I argue that uninsurable risks more broadly can be addressed using liability by applying punitive damages in “near miss” cases of practically compensable harm that are associated with the uninsurable risk. But if some uninsurable risks are unlikely to produce warning shots, then this indirect liability mechanism would not work to mitigate them. And if the uninsurable risk is realized, the harm would be too large to make a compensatory damages judgment practically enforceable. That means AI developers and deployers would have inadequate incentives to mitigate those risks.

Like most forms of domestic AI regulation, unilateral imposition of a strong liability framework is also subject to regulatory arbitrage. If the liability framework is sufficiently binding, AI development may shift to jurisdictions that don’t impose strong liability policies or comparably onerous regulations. While foreign AI developers would still be subject to liability if they harm people in countries with strong liability regimes, it may prove difficult to enforce those judgments if the developer lacks substantial assets in the country where the injuries occur. One potential solution to this problem is international treaties establishing reciprocal enforcement of liability judgments reached by each signatory’s courts.

Finally, liability is a weak tool for influencing the conduct of governmental actors. By default, many governments will be shielded from liability, and many legislative proposals will continue to exempt government entities. Even if governments waive sovereign immunity for AI harms they are responsible for, the prospect of liability is unlikely to sway the decisions of government officials, who are more responsive to political than economic incentives. This means liability is a weak tool in scenarios where the major AI labs get nationalized as the technology gets more powerful. But even if AI research and development remains largely in the private sector, the use of AI by government officials will be poorly constrained by liability. Ideas like law-following AI are likely to be needed to constrain governmental AI deployment.

