Blog Post | July 2025

Future frontiers for research in law and AI

Cullen O'Keefe, Matthijs Maas, Janna Tay

LawAI’s Legal Frontiers team aims to incubate new law and policy proposals that are simultaneously:

  1. Anticipatory, in that they respond to a reasonable forecast of the legal and policy challenges that further advances in AI will produce
  2. Actionable, in that we can make progress within these workstreams even under significant uncertainty
  3. Accommodating to a wide variety of worldviews and technological trajectories, given the shared challenges that AI will create and the uncertainties we have about likely developments
  4. Ambitious, in that they significantly reduce some of the largest risks from AI while also enabling society to reap its benefits

Currently, the Legal Frontiers team owns two workstreams:

  1. AI Agents and the Rule of Law
  2. International Regulatory Institutions

However, the general vision behind Legal Frontiers is to continuously spin out mature workstreams, freeing us to identify and incubate new ones. To that end, we recently updated LawAI’s Workstreams and Research Directions document to list some “Future Frontiers” on which we might work.

We don’t want people to wait for us to start working on these questions, though: they are already ripe for scholarly attention. We have therefore reproduced those Future Frontiers here.

Regulating Government-Developed Frontier AI

Today, governments act primarily as consumers of frontier AI technologies. Frontier AI systems are developed mainly by private companies with little or no initial government involvement; those companies may then tailor their general frontier AI offerings to meet the particular needs of governmental customers.1 Governmental steering, where it occurs at all, tends to enter later in the commercialization lifecycle.

However, as governments increasingly realize the significant strategic implications of frontier AI technologies, they may wish to become more directly involved in the development of frontier AI systems at earlier stages of the development cycle.2 This could range from frontier AI systems initially developed under government contract, to a fully governmental effort to develop next-generation frontier AI systems.3 Indeed, a 2024 report from the U.S.-China Economic and Security Review Commission called for Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability.”4 

Existing proposals for regulating the development and deployment of frontier AI systems envision imposing such regulations on private businesses, on the implicit assumption that frontier AI development and deployment will remain private-led. If and when governments take a larger role in developing frontier AI systems, new regulatory paradigms will be needed. Such paradigms will need to identify and address the unique challenges and opportunities that government-led AI development would pose, as compared to today’s private-led efforts.

Examples of possible questions in this workstream could include:

  1. How are safety and security risks in high-stakes governmental research projects (e.g., the Manhattan Project) usually regulated?
  2. How might the government steer development of frontier AI technologies if it wished to do so?
  3. What existing checks and balances would apply to a government program to develop frontier AI technologies?
  4. How would ideal regulation of government-directed frontier AI development vary depending on the mechanism used for such direction (e.g., contract versus government-run development)?
  5. How might ideal regulation of government-directed frontier AI development vary depending on whether the development is led by military or civilian parts of the government?
  6. If necessary to procure key inputs for the program (e.g., compute), how could the US government collaborate with select allies on such programs?5

Accelerating Technologies that Defend against Risks from AI

It is likely infeasible6 and/or undesirable7 to fully prevent the wide proliferation of many high-risk AI systems. There is therefore increasing interest in developing technologies8 to defend against possible harms from diffuse AI systems, and remedy those harms where defensive measures fail.9 Collectively, we call these “defensive technologies.”10 

Many of the most valuable contributions to the development and deployment of defensive technologies will not come from legal scholars, but rather from some combination of entrepreneurship, technological research and development, and funding. But legal change may also play a role in more directly accelerating the development and deployment of defensive technologies, such as by removing barriers to their adoption that raise the costs of research or reduce its rewards.11

Examples of general questions that might be valuable to explore include:

  1. What are examples of existing policies that unnecessarily hinder research and development into defense-enhancing technologies, such as by (a) raising the costs of conducting that research, or (b) reducing the expected profits from deploying defense-enhancing technologies?12
  2. What are existing legal or policy barriers that inhibit effective diffusion of defensive technologies across society?13
  3. How can the law preferentially14 accelerate defensive technologies?  

Regulating Internal Deployment

Many existing AI policy proposals regulate AI systems at the point when they are first “deployed”: that is, made available for use by persons external to the developer. However, pre-deployment use of AI models by the developing company—“internal deployment”—may also pose substantial risks.15 Because most policy proposals aimed at reducing large-scale risks from AI regulate AI only at or after the point of external deployment, proposals for regulating internal deployment would be valuable.

Example questions in the workstream might include:

  1. What existing modes of regulation in other industries are most analogous to regulation of internal deployment?16
  2. How can the state identify which AI developers are appropriate targets for regulation of internal deployment?
  3. How can regulation of internal deployment simultaneously reduce risk and allow for appropriate exploration of model capabilities and risks?
  4. What are the constitutional (e.g., Fourth Amendment) limitations on regulation of internal deployment?
  5. How can regulation of internal deployment be designed to reduce risks of espionage and information leakage?

Patching Legal Gaps and Loopholes

AI technologies performing legal tasks will likely surface loopholes or gaps in the law: that is, actions permitted by the law that policymakers would likely prefer to prohibit. There are several reasons to expect this:

  1. AI itself constitutes a significant technological change, and technological changes often surface loopholes or gaps in the law.17
  2. AI might accelerate technological change and economic growth,18 which will similarly often surface gaps or loopholes in the law.
  3. AI might be more efficient at finding gaps or loopholes in the law, and quickly exploiting them.

Given that lawmaking is a slow and deliberative process, actors can often exploit gaps or loopholes before policymakers can “patch” them. While this dynamic is not new, AI systems may be able to cause more harm or instability by finding or exploiting gaps and loopholes than humans have in the past, due to their greater speed of action, ability to coordinate, dangerous capabilities, and (possibly) lack of internal moral constraints.

This suggests that it may be very valuable for policymakers to “patch” legal gaps and loopholes by quickly enacting new laws. However, constitutional governance is often intentionally slow, deliberative, and decentralized, suggesting that accelerating lawmaking in certain ways may be unwise and sometimes illegal.

This tension suggests that it would be valuable to research how new legislative and administrative procedures could quickly “patch” legal gaps and loopholes through new law while also complying with the letter and spirit of constitutional limitations on lawmaking.

Responsibly Advancing AI-Enabled Governance

Recent years have seen robust governmental interest in the use of AI technologies for administration and governance.19 As systems advance in capability, this may create significant risks of misuse,20 as well as safety risks from the deployment of advanced systems in high-stakes governmental infrastructure.

A recent report21 identifies the dual imperative for governments to:

  1. Quickly adopt AI technology to enhance state capacity, but
  2. Take care when doing so.

The report lays out three types of interventions worth considering:

  1. “‘Win-win’ opportunities” that help with both adoption and safety;22
  2. “Risk-reducing interventions”; and
  3. “Adoption-accelerating interventions.”

Designing concrete policies in each of these categories would be very valuable, especially policies in the first category, as well as policies in the second or third category that do not come at the expense of the other goal.

AI Lawyers

As AI systems become able to complete more of the tasks typically associated with traditional legal functions—drafting legislation and regulation, adjudicating,23 litigating, drafting contracts, counseling clients, negotiating, investigating possible violations of law, conducting legal research—it will be natural to consider whether and how these tasks should be automated.

We can call AI systems performing such functions “AI lawyers.” If implemented well, AI lawyers could help with many of the challenges that AI could bring. AI lawyers could write new laws to regulate governmental development or use of frontier AI, monitor governmental uses of AI, and craft remedies for violations. AI lawyers could also identify gaps and loopholes in the law, accelerate negotiations between lawmakers, and draft legislative “patches” that reflect lawmakers’ consensus. 

However, entrusting ever more power to AI lawyers entails significant risks. If AI lawyers are not themselves law-following, they may abuse their governmental station to the detriment of citizens. If such systems are not intent-aligned,24 entrusting AI systems with significant governmental power may make it easier for those systems to erode humanity’s control over human affairs. Regardless of whether AI lawyers are aligned, delegating too many legal functions to AI lawyers may frustrate important rule-of-law values, such as democratic responsiveness, intelligibility, and predictability. Furthermore, there are likely certain legal functions that it is important for natural persons to perform, such as serving as a judge on the court of last resort.

Research into the following questions may help humanity navigate the promises and perils of AI lawyers:

  1. Which legal functions should never be automated?
  2. Which legal functions, if entrusted to an AI lawyer, would significantly threaten democratic and rule-of-law values?
  3. How can AI lawyers enhance human autonomy and rule-of-law values?
  4. How can AI lawyers enhance the ability of human governments to respond to challenges from AI?
  5. What substantive safety standards should AI lawyers have to satisfy before being deployed in the human legal system?
  6. Which new legal checks and balances should be introduced if AI lawyers accelerate the speed of legal processes? 

Related to the above, there is also the question of how we can accelerate technologies that would defend against broader risks to the rule of law and democratic accountability. For instance, as lawyers, we may be particularly well-placed to advance legal reforms that make it easier for citizens to leverage “AI lawyers” to defend against vexatious litigation and governmental oppression, or to pursue meritorious claims.25 For example, existing laws regulating the practice of law may impose barriers on citizens’ ability to leverage AI for their legal needs.26 This suggests further questions, such as:

  1. Who will benefit by default from the widespread availability of cheap AI lawyers?
  2. Will laws regulating the practice of law form a significant barrier to defensive (and other beneficial) applications of AI lawyers?
  3. How should laws regulating the practice of law accommodate the possibility of AI lawyers, especially those that are “defensive” in some sense?
  4. How might access to cheap AI lawyers affect the volume of litigation and pursuit of claims? If litigation increases significantly, would this prove counterproductive by slowing court processing times or prompting the judicial system to embrace technological shortcuts?

Approval Regulation in a Decentralized World

After the release of GPT-4, a number of authors and policymakers proposed compute-indexed approval regulation, under which frontier AI systems trained with large amounts of compute would be subjected to heightened pre-deployment scrutiny.27 Such regulation was perceived as attractive in large part because, under the scaling paradigm that produced GPT-4, development of frontier AI systems depended on the use of a small number of large data centers, which could (in theory) be easily monitored.

However, subsequent technological developments that reduce the amount of centralized compute needed to achieve frontier AI capabilities (namely, improvements in decentralized training28 and the rise of reasoning models)29 have cast serious doubt on the long-term viability of compute-indexed approval regulation as a method for preventing unapproved development of highly capable AI models.30

It is not clear, however, that these developments mean that other forms of approval regulation for frontier AI development and deployment would be totally ineffective. Many activities are subject to reasonably effective approval regulation notwithstanding their highly distributed nature. For example, people generally respect laws requiring a license to drive a car, hunt, or practice law, even though these activities are very difficult for the government to reliably prevent ex ante. Further research into approval regulation for more decentralized activities could therefore help illuminate whether approval regulation for frontier AI development could remain viable, at an acceptable cost to other values (e.g., privacy, liberty), notwithstanding these developments in the computational landscape.

Examples of possible questions in this workstream could include:

  1. How effective are existing approval regulation regimes for decentralized activities?
  2. Which decentralized activities most resemble frontier AI development under the current computing paradigm?
  3. How do governments create effective approval regulation regimes for decentralized activities, and how might those mechanisms be applied to decentralized frontier AI development?
  4. How can approval regulation of decentralized frontier AI development be implemented at acceptable costs to other values (e.g., privacy, liberty, administrative efficiency)?