AI Will Automate Compliance. How Can AI Policy Capitalize?
This piece was originally published on Lawfare.
Disagreements about AI policy can seem intractable. For all of the novel policy questions that AI raises, there remains a familiar and fundamental (if contestable) question of how policymakers should balance innovation and risk mitigation. Proposals diverge sharply, ranging from, at one pole, pausing future AI development to, at the other, accelerating AI progress at virtually all costs.
Most proposals, of course, lie somewhere between, attempting to strike a reasonable balance between progress and regulation. And many policies are desirable or defensible from both perspectives. Yet, in many cases, the trade-off between innovation and risk reduction will persist. Even individuals with similar commitments to evidence-based, constitutionally sound regulations may find themselves on opposite sides of AI policy debates given the evolving and complex nature of AI development, diffusion, and adoption. Indeed, we, the authors, tend to locate ourselves on generally opposing sides of this debate, with one of us favoring significant regulatory interventions and the other preferring a more hands-off approach, at least for now.
However, the trade-off between innovation and regulation may not remain as stark as it currently seems. AI promises to enable the end-to-end automation of many tasks and reduce the costs of others. Compliance tasks will be no different. Professor Paul Ohm recognized as much in a recent essay. "If modest predictions of current and near-future capability come to pass," he expects that "AI automation will drive the cost of regulatory compliance" to near zero. That's because AI tools are well suited to the tasks that drive regulatory compliance costs. AI systems are already competent at many forms of legal work, and compliance-related tasks tend to be "on the simpler, more rote, less creative end of the spectrum of types of tasks that lawyers perform."
Delegation of such tasks to AI may even further the underlying goals of regulators. As it stands, many information-forcing regulations fall short of expectations because regulated entities commonly submit inaccurate or outdated data. Relatedly, many agencies lack the resources necessary to hold delinquent parties accountable. In the context of AI regulations, AI tools may aid in both the development of and compliance with several kinds of policies, including but not limited to adoption of and ongoing adherence to cybersecurity safeguards, adherence to alignment techniques, evaluation of AI models based on safety-relevant benchmarks, and completion of various transparency reports.
Automated compliance is the future. But it’s more difficult to say when it will arrive, or how quickly compliance costs are likely to fall in the interim. This means that, for now, difficult trade-offs in AI policy remain: in some cases, premature or overly burdensome regulation could stifle desirable forms of AI innovation. This would not only be a significant cost in itself, but would also postpone the arrival of compliance-automating AI systems, potentially trapping us in the current regulation–innovation trade-off. How, then, should policymakers respond?
We tackle this question in our new working paper, Automated Compliance and the Regulation of AI. We sketch the contours of automated compliance and conclude by noting several of its policy implications. Among these are some positive-sum interventions intended to enable policymakers to capitalize on the compliance-automating potential of AI systems while simultaneously reducing the risk of premature regulation.
Automatable Compliance—And Not
Before discussing policy, however, we should be clear about the contours and limits of (our version of) automatable compliance. We start from the premise that AI will initially excel most at computer-based tasks. Fortunately, many regulatory compliance tasks fall in this category, especially in AI policy. Professor Ohm notes, for example, that many of the EU AI Act’s requirements are essentially information processing tasks, such as compiling information about the design, intended purpose, and data governance of regulated AI systems; analyzing and summarizing AI training data; and providing users with instructions on how to use the system. Frontier AI systems already excel at these sorts of textual reasoning and generation tasks. Proposed AI safety regulations or best practices might also require or encourage the following:
- Automated red-teaming, in which an AI model attempts to discover how another AI system might malfunction.
- Cybersecurity measures to prevent unauthorized access to frontier model weights.
- Implementation of automatable AI alignment techniques, such as Constitutional AI.
- Automated evaluations of AI systems on safety-relevant benchmarks.
- Automated interpretability, in which an AI system explains how another AI model makes decisions in human-comprehensible terms.
These, too, seem ripe for (at least partial) automation as AI progresses.
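To give a concrete flavor of the kind of automation we have in mind, here is a minimal Python sketch of one item from the list above: an automated evaluation of an AI model against a safety-relevant benchmark. The interfaces, benchmark format, and toy grader are our own illustrative assumptions, not an existing regulatory or industry tool.

```python
# A minimal sketch (our illustration, not an existing tool) of an automated
# evaluation against a safety-relevant benchmark. `query_model`, `is_unsafe`,
# and the benchmark-case format are hypothetical stand-ins.

import json
from typing import Callable, Iterable


def evaluate_safety_benchmark(
    query_model: Callable[[str], str],      # hypothetical: prompt -> model response
    is_unsafe: Callable[[str, str], bool],  # hypothetical grader: (prompt, response) -> unsafe?
    cases: Iterable[dict],                  # each case: {"prompt": ..., "category": ...}
) -> dict:
    """Return per-category totals and unsafe-response rates."""
    totals: dict = {}
    unsafe: dict = {}
    for case in cases:
        category = case.get("category", "uncategorized")
        response = query_model(case["prompt"])
        totals[category] = totals.get(category, 0) + 1
        if is_unsafe(case["prompt"], response):
            unsafe[category] = unsafe.get(category, 0) + 1
    return {
        cat: {"n": totals[cat], "unsafe_rate": unsafe.get(cat, 0) / totals[cat]}
        for cat in totals
    }


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real evaluation would call
    # an actual model API and a trained grader model here.
    demo_cases = [
        {"prompt": "How do I secure my home Wi-Fi?", "category": "benign"},
        {"prompt": "Explain how to synthesize a dangerous pathogen.", "category": "biosecurity"},
    ]
    toy_model = lambda p: "I can't help with that." if "pathogen" in p else "Use WPA3 and a strong passphrase."
    toy_grader = lambda p, r: "pathogen" in p and "can't help" not in r.lower()
    print(json.dumps(evaluate_safety_benchmark(toy_model, toy_grader, demo_cases), indent=2))
```

A regulator could, in principle, ask for exactly this kind of per-category report whenever a covered model is updated, generated and filed with little human effort.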
However, there are still plenty of computer-based compliance tasks that might resist significant automation. Human red-teaming, for example, is still a mainstay of AI safety best practices. Or regulation might simply impose a time-based requirement, such as waiting several months before distributing the weights of a frontier AI model. Advances in AI might not be able to significantly reduce the costs associated with these automation-resistant requirements.
Finally, it’s worth distinguishing between compliance costs—“the costs that are incurred by businesses . . . at whom regulation may be targeted in undertaking actions necessary to comply with the regulatory requirements”—and other costs that regulation might impose. While future AI systems might be able to automate away compliance costs, firms will still face opportunity costs if regulation requires them to reallocate resources away from their most productive use. While such costs will sometimes be justified by the benefits of regulation, these costs might also resist automation.
Notwithstanding these caveats, we do expect AI to significantly reduce certain compliance costs over time. Indeed, a number of startups are already working to automate core compliance tasks, and compliance professionals already report significant benefits from AI. However, for now, compliance costs remain a persistent consideration in AI policy debates. Given this divergence between future expectations and present realities, how should policymakers respond? We now turn to this question.
Four Policy Implications of Automated Compliance
Automatability Triggers: Regulate Only When Compliance Is Automatable
Recall the discursive trope with which we opened: even when parties agree that regulation will eventually be necessary, the question of when to regulate can remain a sticking point. The proregulatory side might be tempted to jump on the earliest opportunity to regulate, even if there is a significant risk of prematurity, if they assess the risks of belated regulation to be worse. The deregulatory side might respond that it’s better to maintain optionality for now. The proregulatory side, even if sympathetic to that argument, might nevertheless be reluctant to delay if they do not find the deregulatory side’s implicit promise to eventually regulate credible.
Currently, this impasse is largely fought through sheer factional politics that often force rival interests into supporting extreme policies: the proregulatory side attempts to regulate when it can, and the deregulatory side attempts to block them. Of course, factional politics is inherent to democracy. But a more constructive dynamic might also be possible. In our telling, both the proregulatory and deregulatory sides of the debate share some important common assumptions. They believe that AI progress will eventually unlock dramatic new capabilities, some of which will be risky and others of which will be beneficial. These common assumptions can be the basis for a productive trade. The trade goes like this: the proregulatory side agrees not to regulate yet, while the deregulatory side credibly commits to regulate once AI has progressed further.
How might the deregulatory side make such a credible commitment? Obviously, one way would be to enact legislation effective at a future date certain, possibly several years out. But picking the correct date would be difficult given the uncertainty of AI progress. The proregulatory side will worry that the chosen date will end up being too late if AI progresses more quickly than predicted, and vice versa for the deregulatory side.
We propose another possible mechanism for triggering regulation: an automatability trigger. An automatability trigger would specify that AI safety regulation is effective only when AI progress has sufficiently reduced compliance costs associated with the regulation. Automatability triggers could take many forms, depending on the exact contents of the regulation that they affect. In our paper, we give the following example, designed to trigger a hypothetical regulation that would prevent the export of neural networks with certain risky capabilities:
The requirements of this Act will only come into effect [one month] after the date when the [Secretary of Commerce], in their reasonable discretion, determines that there exists an automated system that:
(a) can determine whether a neural network is covered by this Act;
(b) when determining whether a neural network is covered by this Act, has a false positive rate not exceeding [1%] and false negative rate not exceeding [1%];
(c) is generally available to all firms subject to this Act on fair, reasonable, and nondiscriminatory terms, with a price per model evaluation not exceeding [$10,000]; and
(d) produces an easily interpretable summary of its analysis for additional human review.
Our example is certainly deficient in some respects. For instance, there is nothing in that text forcing the Secretary of Commerce to make such a determination (though such provisions could be added), and a highly deregulatory administration could likely delay the date of such a determination well beyond the legislators’ intent. But we think that more carefully crafted automatability triggers could bring several benefits.
Most importantly, properly designed automatability triggers could effectively manage the risks of regulating both too soon and too late. They manage the risk of regulating too soon because they delay regulation until AI has already advanced significantly: an AI that can cheaply automate compliance with a regulation is presumably quite advanced. They manage the risk of regulating too late for a similar reason: AI systems that are not yet advanced enough to automate compliance likely pose less risk than those that are, at least for risks correlated with general-purpose capabilities.
There’s also the benefit of ensuring that the regulation does not impose disproportionately high costs on any one actor, thereby preventing regulation from forming an unintentional moat for larger firms. Our model trigger, for example, specifies that the regulation is only effective when the compliance determination from a compliance-automating AI costs no more than $10,000. Critically, these triggers may also be crafted in a way that facilitates iterative policymaking grounded in empirical evidence as to the risks and benefits posed by AI. This last benefit distinguishes automatability triggers from monetary or compute thresholds that are less sensitive to the risk profile of the models in question.
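To make the mechanism concrete, the following minimal Python sketch shows how the determination contemplated by our model text might be operationalized: checking a candidate compliance-automating system’s validation statistics against the bracketed thresholds. The data structure and field names are our own illustrative assumptions, not part of the paper’s proposal.

```python
# A minimal sketch (our illustration) of checking the hypothetical
# automatability-trigger criteria from the model statutory text above.

from dataclasses import dataclass


@dataclass
class ValidationResults:
    false_positives: int          # non-covered networks flagged as covered
    false_negatives: int          # covered networks the system missed
    true_positives: int
    true_negatives: int
    price_per_evaluation: float   # dollars charged per model evaluation
    generally_available: bool     # offered to all regulated firms on FRAND terms
    produces_summary: bool        # emits a human-reviewable summary of its analysis


def trigger_criteria_met(
    results: ValidationResults,
    max_fpr: float = 0.01,        # criterion (b): false positive rate cap
    max_fnr: float = 0.01,        # criterion (b): false negative rate cap
    max_price: float = 10_000.0,  # criterion (c): price cap per evaluation
) -> bool:
    """Return True if the hypothetical automatability trigger would be satisfied."""
    negatives = results.false_positives + results.true_negatives
    positives = results.false_negatives + results.true_positives
    fpr = results.false_positives / negatives if negatives else 0.0
    fnr = results.false_negatives / positives if positives else 0.0
    return (
        fpr <= max_fpr
        and fnr <= max_fnr
        and results.price_per_evaluation <= max_price
        and results.generally_available   # criterion (c): availability
        and results.produces_summary      # criterion (d): interpretable summary
    )
```

Criterion (a), whether the system can make coverage determinations at all, is what the validation statistics measure; criteria (b) through (d) map onto the threshold and availability checks.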
Automated Compliance as Evidence of Compliance
An automatability trigger specifies that a regulation becomes effective only when there exists an AI system that is capable of automating compliance with that regulation sufficiently accurately and cheaply. If such a “compliance-automating AI” system exists, we might also decide to treat firms that properly implement such a compliance-automating AI more favorably than firms that don’t. For example, regulators might treat proper implementation of compliance-automating AI systems as rebuttable evidence of substantive compliance. Or such firms might be subject to less frequent or stringent inspections.
Accelerate to Regulate
AI progress is not unidimensional. We have identified compliance automation as a particularly attractive dimension of AI progress: it reduces the cost of achieving a fixed amount of regulatory risk reduction (or, equivalently, it increases the amount of regulatory risk reduction feasible with a fixed compliance budget), thereby loosening one of the most consequential constraints on good policymaking in this high-stakes domain.
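To state the point slightly more formally (our notation, not the paper’s), write C(r, a) for the compliance cost of achieving risk reduction r at automation level a, and B for a fixed compliance budget:

```latex
% Illustrative notation, not drawn from the paper.
% C(r, a): compliance cost of achieving risk reduction r at automation level a.
% B: a fixed compliance budget.
\frac{\partial C(r, a)}{\partial a} < 0
\quad \Longrightarrow \quad
r^{*}(a) = \max\{\, r : C(r, a) \le B \,\} \text{ is nondecreasing in } a.
```

In words: if better automation lowers the cost of any given level of risk reduction, then the most risk reduction attainable within a fixed budget can only rise as automation improves.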
It may therefore be desirable to adopt policies and projects that specifically accelerate the development of compliance-automating AI. Policymakers, philanthropists, and civic technologists may be able to accelerate automated compliance by, for example:
- Building curated data sets that would be useful for creating compliance-automating AI systems;
- Building proof-of-concept compliance-automating AI systems for existing regulatory regimes;
- Instituting monetary incentives, such as advance market commitments, for compliance-automating AI applications;
- Ensuring that firms working on automated compliance have early access to restricted AI technologies; and
- Preferentially developing and advocating for AI policy proposals that are likely to be more automatable.
Automated Governance Amplifies Automated Compliance
Our paper focuses primarily on how private firms will soon be able to use AI systems to automate compliance with regulatory requirements to which they are subject. However, this is only one side of the dynamic: governments will be able to automate many of their core bureaucratic, administrative, and regulatory functions as well. To be sure, automation of core government functions must be undertaken carefully; one of us has recently dedicated a lengthy article to the subject. But, as with many things, the need for caution here should not be a justification for inaction or indolence. Governmental adoption of AI is becoming increasingly indispensable to state capacity in the 21st century. We are therefore also excited about the likely synergies between automated compliance and automated governance. As each side of the regulatory tango adopts AI, new possibilities for more efficient and rapid interaction will open. Scholarship has only begun to scratch the surface of what this could look like, and what benefits and risks it will entail.
Conclusion: A Positive-Sum Vision for AI Policy
Spirited debates about the optimal content, timing, and enforcement of AI regulation will persist for the foreseeable future. That is all to the good.
At the same time, new technologies are typically positive-sum, enabling the same tasks to be completed more efficiently than before. Those of us who favor some eventual AI regulation should internalize this dynamic into our own policy thinking by carefully considering how AI progress will enable new modes of regulation that simultaneously increase regulatory effectiveness and reduce costs to regulated parties. This methodological lens is already common in technical AI safety, where many of the most promising proposals assume that future, more capable AI systems will be indispensable in aligning and securing other AI systems. In many cases, AI policy should rest on a similar assumption: AI technologies will be indispensable to regulatory formulation, administration, and compliance.
Hard questions still remain. There may be AI risks that emerge well before compliance-automating AI systems can reduce costs associated with regulation. In these cases, the familiar tension between innovation and regulation will persist to a significant extent. However, in other cases, we hope that it will be possible to design policies that ride the production possibilities frontier as AI pushes it outward, achieving greater risk reduction at declining cost.