AI Preemption and “Generally Applicable” Laws

Proposals for federal preemption of state AI laws, such as the moratorium that was removed from the most recent reconciliation bill in June 2025, often include an exception for “generally applicable” laws. Despite the frequency with which this phrase appears in legislative proposals and the important role it plays in the arguments of preemption advocates, however, there is very little agreement among experts as to what exactly “generally applicable” means in the context of AI preemption. Unfortunately, this means that, for any given preemption proposal, very little can be said for certain about which laws would or would not be exempted.

The most we can say for sure is that the term “generally applicable” is supposed to describe a law that does not single out or target artificial intelligence specifically. Thus, a state law like California’s recently enacted “Transparency in Frontier Artificial Intelligence Act” (SB 53) would likely not be considered “generally applicable” by a court, because it imposes new requirements specifically on AI companies, rather than requirements that apply “generally” and affect AI companies only incidentally if at all. 

This basic definition, however, leaves a host of important questions unanswered. What about laws that don’t specifically mention AI, but nevertheless are clearly intended to address issues created by AI systems? Tennessee’s ELVIS Act, which was designed to protect musicians from unauthorized commercial use of their voices, is one example of such a law. It prohibits the reproduction of an artist’s voice by any technological means, but the law was obviously passed in 2024 because recent advances in AI capabilities have made it possible to reproduce celebrity voices more accurately than was previously possible. Alternatively, what about laws that were not originally intended to apply to AI systems, but which happen to place a disproportionate burden on AI systems relative to other technologies? No one knows precisely how a court would resolve the question of whether such laws are “generally applicable,” and if you asked four different people who think about AI preemption for a living you might well get four different answers. If federal preemption legislation is eventually enacted, and if an exception for “generally applicable” laws is included, this question will likely be extensively litigated—and it’s likely that different courts will come to different conclusions.

Usually, the best way to get an idea of how a court will interpret a given phrase is to look at how courts have interpreted the same phrase in similar contexts in the past. However, while there is some existing case law discussing the meaning of “generally applicable” in the context of preemption, LawAI’s research hasn’t turned up any cases that shed a great deal of light on the question of what the term would mean in the specific context of AI preemption. It’s therefore likely that we won’t have a clear idea of what “generally applicable” really means until some years from now, when courts may (or may not) have had occasion to answer the question with respect to a variety of different arguably “generally applicable” state laws.

Last updated: December 11, 2025, at 4:19 p.m. Eastern Time

AI Federalism: The Right Way to Do Preemption

On November 20th, congressional Republicans launched a last-minute attempt to insert an artificial intelligence (AI) preemption provision into the must-pass National Defense Authorization Act (NDAA). As of this writing, the text of the proposed addition has not been made public. However, the fact that the provision is being introduced into a must-pass bill at the eleventh hour may indicate that the provision will resemble the preemption provision that was added to, and ultimately stripped out of, the most recent reconciliation bill. The U.S. House of Representatives passed an early version of that “moratorium” on state AI regulation in May. While the exact scope of the House version of the moratorium has been the subject of some debate, it would essentially have prohibited states and municipalities from enforcing virtually any law or rule regulating “artificial intelligence,” broadly defined. There followed a hectic and exciting back-and-forth political struggle over whether and in what form the moratorium would be enacted. Over the course of the dispute, the moratorium was rebranded as a “temporary pause,” amended to include various exceptions (notably including a carve-out for “generally applicable” laws), reduced from 10 years’ duration to five, and made conditional on states’ acceptance of new Broadband Equity Access and Deployment (BEAD) Program funding. Ultimately, however, the “temporary pause” was defeated, with the Senate voting 99-1 for an amendment stripping it from the reconciliation bill.

The preemption provision that failed in June would have virtually eliminated targeted state AI regulation and replaced it with nothing. Since then, an increasing number of politicians have rejected this approach. But, as the ongoing attempt to add preemption into the NDAA demonstrates, this does not mean that federal preemption of state AI regulations is gone for good. In fact, many Republicans and even one or two influential Democrats in Congress continue to argue that AI preemption is a federal legislative priority. What it does mean is that any moratorium introduced in the near future will likely have to be packaged with some kind of substantive federal AI policy in order to have any realistic chance of succeeding.

For those who have been hoping for years that the federal government would one day implement some meaningful AI policy, this presents an opportunity. If Republicans hope to pass a new moratorium through the normal legislative process, rather than as part of the next reconciliation bill, they will need to offer a deal that can win the approval of a number of Democratic senators (seven, currently, although that number may grow or shrink following the 2026 midterm elections) to overcome a filibuster. The most likely outcome is that nothing will come of this opportunity. An increasingly polarized political climate means that passing legislation is harder than it’s ever been before, and hammering out a deal that would be broadly acceptable to industry and the various other interest groups supporting and opposing preemption and AI regulation may not be feasible. Still, there’s a chance.

Efforts to include a moratorium in the NDAA seem unlikely to succeed. Even if this particular effort fails, however, preemption of state AI laws will likely continue to be a hot topic in AI governance for the foreseeable future. This means that arguably the most pressing AI policy question of the moment is: How should federal preemption of state AI laws and regulations work? In other words, what state laws should be preempted, and what kind of federal framework should they be replaced with?

I argue that the answer to that question is as follows: Regulatory authority over AI should be allocated between states and the federal government by means of an iterative process that takes place over the course of years and involves reactive preemption of fairly narrow categories of state law.

The evidence I’ll offer in support of this claim is primarily historical. As I argue below, this iterative back-and-forth process is the only way in which the allocation of regulatory authority over an important emerging technology has ever been determined in the United States. That’s not a historical accident; it’s a consequence of the fact that the approach described above is the only sensible approach that exists. The world is complicated, and predicting the future course of a technology’s development is notoriously difficult. So is predicting the kinds of governance measures that a given technology and its applications will require. Trying to determine how regulatory authority over a new technology should be allocated ex ante is like trying to decide how each room of an office building should be furnished before the blueprints have even been drawn up—it can be done, but the results will inevitably be disappointing.

The Reconciliation Moratorium Was Unprecedented

The reconciliation moratorium, if it had passed, would have been unprecedented with respect to its substance and its scope. The lack of substance—that is, the lack of any affirmative federal AI policy accompanying the preemption of state regulations—has been widely discussed elsewhere. It’s worth clarifying, however, that deregulatory preemption is not in and of itself an unprecedented or inherently bad idea. The Airline Deregulation Act of 1978, notably, preempted state laws relating to airlines’ “rates, routes, or services” and also significantly reduced federal regulation in the same areas. Congress determined that “maximum reliance on competitive market forces” would lead to increased efficiency and benefit consumers and, therefore, implemented federal deregulation while also prohibiting states from stepping in to fill the gap.

What distinguished the moratorium from the Airline Deregulation Act was its scope. The moratorium would have prohibited states from enforcing “any law or regulation … regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce” (with a few exceptions, including for “generally applicable” laws). But preemption of “any state law or regulation … regulating airplanes entered into interstate commerce” would have been totally out of the question in 1978. In fact, the vast majority of airplane-related state laws and regulations were unaffected by the Airline Deregulation Act. By the late 1970s, airplanes were a relatively well understood technology and air travel had been extensively regulated, both by the states and by the federal government, for decades. Many states devoted long sections of their statutory codes exclusively to aeronautics. The Airline Deregulation Act’s prohibition on state regulation of airline “rates, routes, or services” had no effect on existing state laws governing airlines’ liability for damage to luggage, airport zoning regulations, the privileges and duties of airport security personnel, state licensing requirements for pilots and for aircraft, or the legality of maneuvering an airplane on a public highway.

In short, the AI moratorium was completely unprecedented because it would have preempted an extremely broad category of state law and replaced it with nothing. In all the discussions I’ve had with die-hard AI preemption proponents (and there have been many), the only preemption measures anyone has pointed to that are anywhere near as broad as the reconciliation moratorium were packaged with an extensive and sophisticated scheme of federal regulation. The Federal Food, Drug, and Cosmetic Act, for example, prohibits states from establishing “any requirement [for medical devices, broadly defined] … which is different from … a requirement applicable under this chapter to the device.” But the breadth of that provision is proportional to the legendary intricacy of the federal regulatory regime of which it forms a part. The idea of a Food and Drug Administration-style licensing regime for frontier AI systems has been proposed before, but it’s probably a bad idea for the reasons discussed in Daniel Carpenter’s excellent article on the subject. Regardless, proponents of preemption would presumably oppose such a heavy-handed regulatory regime no matter how broad its preemption provisions were.

Premature and Overbroad Preemption Is a Bad Idea

Some might argue that the unprecedented nature of the moratorium was a warranted response to unprecedented circumstances. The difficulty of getting bills through a highly polarized Congress means that piecemeal preemption may be harder to pull off today than it was in the 20th century. Moreover, some observers believe that AI is an unprecedented technology (although there is disagreement on this point), while others argue that the level of state interest in regulating AI is unprecedented and therefore requires an unprecedentedly swift and broad federal response. That latter claim is, in my opinion, overstated: While a number of state bills that are in some sense about “AI” have been proposed, most of these will not become law, and the vast majority of those that do will not impose any meaningful burden on AI developers. That said, preemption proponents have legitimate concerns about state overregulation harming innovation. These concerns (much like concerns about existential risk or other hypothetical harms from powerful future AI systems) are currently speculative, because the state AI laws that are currently in effect do not place significant burdens on developers or deployers of AI systems. But premature regulation of an emerging technology can lead to regulatory lock-in and harmful path dependence, which bolsters the case for proactive and early preemption.

Because of these reasonable arguments for departing from the traditional iterative and narrow approach to preemption, establishing that the moratorium was unprecedented is less important than understanding why the moratorium’s approach to preemption has never been tried before. In my opinion, the reason is that any important new technology will require some amount of state regulation and some amount of federal regulation, and it’s impossible to determine the appropriate limits of state and federal authority ex ante.

There’s no simple formula for determining whether a given regulatory task should be undertaken by the states, the federal government, both, or neither. As a basic rule of thumb, though, the states’ case is strongest when the issue is purely local and relates to a state’s “police power”—that is, when it implicates a state’s duty to protect the health, safety, and welfare of its citizens. The federal government’s case, meanwhile, is typically strongest when the issue is purely one of interstate commerce or other federal concerns such as national security.

In the case of the Airline Deregulation Act, discussed above, Congress appropriately determined in 1978 that the regulation of airline rates and routes—an interstate commerce issue if ever there was one—should be undertaken by the federal government, and that the federal government’s approach should be deregulatory. But this was only one part of a back-and-forth exchange that took place over the course of decades in response to technological and societal developments. Regulation of airport noise levels, for example, implicates both interstate commerce (because airlines are typically used for interstate travel) and the police power (because “the area of noise regulation has traditionally been one of local concern”). It would not have been possible to provide a good answer to the question of who should regulate airport noise levels a few years after the invention of the airplane, because at that point modern airports—which facilitate the takeoff and landing of more than 44,000 U.S. flights every day—simply didn’t exist. Instead, a reasonable solution to the complicated problem was eventually worked out through a combination of court decisions, local and federal legislation, and federal agency guidance. All of these responded to technological and societal developments (the jet engine; supersonic flight; increases in the number, size, and economic importance of airports) rather than trying to anticipate them.

Consider another example: electricity. Electricity was first used to power homes in the U.S. in the 1880s, achieved about 50 percent adoption by 1925, was up to 85 percent by 1945, and was used in nearly all homes by 1960. During its early days, electricity was delivered via direct current and had to be generated no more than a few miles from where it was consumed. Technological advances, most notably the widespread adoption of alternating current, eventually allowed electricity to be delivered to consumers from power plants much farther away, allowing for cheaper power due to economies of scale. Initially, the electric power industry was regulated primarily at the municipal level, but beginning in 1907 states began to assume primary regulatory authority. In 1935, in response to court decisions striking down state regulations governing the interstate sale of electricity as unconstitutional, Congress passed the Federal Power Act (FPA), which “authorized the [predecessor of the Federal Energy Regulatory Commission (FERC)] to regulate the interstate transportation and wholesale sale (i.e. sale for resale) of electric energy, while leaving jurisdiction over intrastate transportation and retail sales (i.e. sale to the ultimate consumer) in the hands of the states.” Courts later held that the FPA impliedly preempted most state regulations governing interstate wholesale sales of electricity.

If your eyes began to glaze over at some point toward the end of that last paragraph, good! You now understand that the process by which regulatory authority over the electric power industry was apportioned between the states and the federal government was extremely complicated. But the FPA only dealt with a small fraction of all the regulations affecting electricity. There are also state and local laws and regulations governing the licensing of electricians, the depth at which power lines must be buried, and the criminal penalties associated with electricity theft, to name a few examples. By the same token, there are federal laws and rules concerning tax credits for wind turbine blade manufacturing, the legality of purchasing substation transformers from countries that are “foreign adversaries,” lightning protection for commercial space launch sites, use of electrocution for federal executions, … and so on and so forth. I’m not arguing for more regulation here—it’s possible that the U.S. has too many laws, and that some of the regulations governing electricity are unnecessary or harmful. But even if extensive deregulation occurred, eliminating 90 percent of state, local, and federal rules relating to electricity, a great number of necessary or salutary rules would remain at both the federal and state levels. Obviously, the benefits of electricity have far exceeded the costs imposed by its risks. At the same time, no one denies that electricity and its applications do create some real dangers, and few sensible people dispute the fact that it’s beneficial to society for the government to address some of these dangers with common-sense regulations designed to keep people safe.

Again, the reconciliation moratorium would have applied, essentially, to any laws “limiting, restricting, or otherwise regulating” AI models or AI systems, unless they were “generally applicable” (in other words, unless they applied to AI systems only incidentally, in the same way that they applied to other technologies, and did not single out AI for special treatment). Imagine if such a restriction had been imposed on state regulation of electricity, at a similar early point in the development of that technology. The federal government would have been stuck licensing electricians, responding to blackouts, and deciding which municipalities should have buried as opposed to overhead power lines. If this sounds like a good idea to you, keep in mind that, regardless of your politics, the federal government has not always taken an approach to regulation that you would agree with. Allowing state and local control over purely local issues allows more people to have what they want than would a one-size-fits-all approach determined in Washington, D.C.

But the issue with the reconciliation moratorium wasn’t just that it did a bad job of allocating authority between states and the federal government. Any attempt to make a final determination of how that authority should be allocated for the next 10 years, no matter how smart its designers were, would have met with failure. Think about how difficult it would have been for someone living a mere five or 10 years after electricity first came into commercial use to determine, ex ante, how regulatory authority over the new technology should be allocated between states and the federal government. It would, of course, have been impossible to do even a passable job. The knowledge that governing interstate commerce is traditionally the core role of the federal government, while addressing local problems that affect the health and safety of state residents is traditionally considered to be the core of a state’s police power, takes you only so far. Unless you can predict all the different risks and problems that the new technology and its applications will create as it matures, it’s simply not possible to do a good job of determining which of them should be addressed by the federal government and which should be left to the states.

Airplanes and electricity are far from the only technologies that can be used to prove this point. The other technologies commonly cited in historical case studies on AI regulation—railroads, nuclear power, telecommunications, and the internet—followed the same pattern. Regulatory authority over each of these technologies was allocated between states and the federal government via an iterative back-and-forth process that responded to technological and societal developments rather than trying to anticipate them. Preemption of well-defined categories of state law was typically an important part of that process, but preemption invariably occurred after the federal government had determined how it wanted to regulate the technology in question. The Carnegie Endowment’s excellent recent piece on the history of emerging technology preemption reaches similar conclusions and correctly observes that “[l]egislators do not need to work out the final division between federal and state governments all in one go.”

The Right Way to Do Preemption

Because frontier AI development is to a great extent an interstate commerce issue, it would in an ideal world be regulated primarily by the federal government rather than the states (although the fact that we don’t live in an ideal world complicates things somewhat). While the premature and overbroad attempts at preemption that have been introduced so far would almost certainly end up doing more harm than good, it should be possible (in theory, at least) to address legitimate concerns about state overregulation through an iterative process like the one described above. In other words, there is a right way to do preemption—although it remains to be seen whether any worthwhile preemption measure will ever actually be introduced. Below are four suggestions for how preemption of state AI laws ought to take place.

1. The scope of any preemption measure should correspond to the scope of the federal policies implemented.

The White House AI Action Plan laid out a vision for AI governance that emphasized the importance of innovation while also highlighting some important federal policy priorities for ensuring that the development and deployment of powerful future AI systems happens securely. Building a world-leading testing and evaluations ecosystem, implementing federal government evaluations of frontier models for national security risks, bolstering physical and cybersecurity at frontier labs, increasing standard-setting activity by the Center for AI Standards and Innovation (CAISI), investing in vital interpretability and control research, ramping up export control enforcement, and improving the federal government’s AI incident response capacity are all crucial priorities. Additional light-touch frontier AI security measures that Congress might consider include (to name a few) codifying and funding CAISI, requiring mandatory incident reporting for frontier AI incidents, establishing federal AI whistleblower protections, and authorizing mandatory transparency requirements and reporting requirements for frontier model development. None of these policies would impose any significant burden on innovation, and they might well provide significant public safety and national security benefits.

But regardless of which policies Congress ultimately chooses to adopt, the scope of preemption should correspond to the scope of the federal policies implemented. This correspondence could be close to 1:1. For instance, a federal bill that included AI whistleblower protections and mandatory transparency requirements for frontier model developers could be packaged with a provision preempting only state AI whistleblower laws (such as § 4 of California’s SB 53) and state frontier model transparency laws (such as § 2 of SB 53).

However, a more comprehensive federal framework might justify broader preemption. Under the legal doctrine of “field preemption,” federal regulatory regimes so pervasive that they occupy an entire field of regulation are interpreted by courts to impliedly preempt any state regulation in that field. It should be noted, however, that the “field” in question is rarely if ever so broadly defined that all state regulations relating to an important emerging technology are preempted. Thus, while courts interpreted the Atomic Energy Act to preempt state laws governing the “construction and operation” of nuclear power plants and laws “motivated by radiological concerns,” many state laws regulating nuclear power plants were left undisturbed. In the AI context, it might make sense to preempt state laws intended to encourage the safe development of frontier AI systems as part of a package including federal frontier AI safety policies. It would make less sense to implement the same federal frontier AI safety policies and preempt state laws governing self-driving cars, because this would expand the scope of preemption far beyond the scope of the newly introduced federal policy.

As the Airline Deregulation Act and the Internet Tax Freedom Act demonstrate, deregulatory preemption can also be a wise policy choice. Critically, however, each of those measures (a) preempted narrow and well-understood categories of state regulation and (b) reflected a specific congressional determination that neither the states nor the federal government should regulate in a certain well-defined area.

2. Preemption should focus on relatively narrow and well-understood categories of state regulation.

“Narrow” is relative, of course. It’s possible for a preemption measure to be too narrow. A federal bill that included preemption of state laws governing the use of AI in restaurants would probably not be improved if its scope was limited so that it applied only to Italian restaurants. Dean Ball’s thoughtful recent proposal provides a good starting point for discussion. Ball’s proposal would create a mandatory federal transparency regime, with slightly stronger requirements than existing state transparency legislation, and in exchange would preempt four categories of state law—state laws governing algorithmic pricing, algorithmic discrimination, disclosure mandates, and “mental health.”

Offering an opinion on whether this trade would be a good thing from a policy perspective, or whether it would be politically viable, is beyond the scope of this piece. But it does, at least, do a much better job than other publicly available proposals of specifically identifying and defining the categories of state law that are to be preempted. I do think that the “mental health” category is significantly overbroad; my sense is that Ball intended to address a specific class of state law regulating the use of AI systems to provide therapy or mental health treatment. His proposal would, in my opinion, be improved by identifying and targeting that category of law more specifically. As written, his proposed definition would sweep in a wide variety of potential future state laws that would be both (a) harmless or salutary and (b) concerned primarily with addressing purely local issues. Nevertheless, Ball’s proposal strikes approximately the correct balance between legitimate concerns regarding state overregulation and equally legitimate concerns regarding the unintended consequences of premature and overbroad preemption.

3. Deregulatory preemption should reflect a specific congressional determination against regulating in a well-defined area.

An under-discussed aspect of the reconciliation moratorium debate was that supporters of the moratorium, at least for the most part, did not claim that they were eliminating state regulations and replacing them with nothing as part of a deregulatory effort. Instead, they claimed that they were preempting state laws now and would get around to enacting a federal regulatory framework at some later date.

This was not and is not the correct approach. Eliminating states’ ability to regulate in an area, while decreasing Congress’s political incentives to reach a preemption-for-policy trade in the same area, decreases the odds that Congress will take meaningful action in the near future. And setting aside the political considerations, that kind of preemption would make it impossible for the normal back-and-forth process through which regulatory authority is usually allocated to take place. If states are banned from regulating, there’s no opportunity for Congress, federal agencies, courts, and the public to learn from experience what categories of state regulation are beneficial and which place unnecessary burdens on interstate commerce. Deregulatory preemption can be a legitimate policy choice, but when it occurs it should be the result of an actual congressional policy judgment favoring deregulation. And, of course, this congressional judgment should focus on specific, well-understood, and relatively narrow categories of state law. As a general rule of thumb, express preemption should take place only once Congress has a decent idea of what exactly is being preempted.

4. Preemption should facilitate, rather than prevent, an iterative process for allocating regulatory authority between states and the federal government.

As the case studies discussed above demonstrate, the main problem with premature and overbroad preemption is that it would make it impossible to follow the normal process for determining the appropriate boundaries of state and federal regulatory jurisdiction. Instead, preemption should take place after the federal government has formed some idea of how it wants to regulate AI and what specific categories of state law are inconsistent with its preferred regulatory scheme.

Ball’s proposal is instructive here as well, in that it provides for a time-limited preemption window of three years. Given the pace at which AI capabilities research is progressing, a 10- or even five-year moratorium on state regulation in a given area is far more problematic than a shorter period of preemption. This is, at least in part, because shorter preemption periods are less likely to prevent the kind of iterative back-and-forth process described above from occurring. Even three years may be too long in the AI governance context, however; three years prior to this writing, ChatGPT had not yet been publicly released. A two-year preemption period for narrowly defined categories of state law, by contrast, might be short enough to facilitate the kind of iterative process described above rather than preventing a productive back-and-forth from occurring.

***

Figuring out who should regulate an emerging technology and its applications is a complicated and difficult task that should be handled on an issue-by-issue basis. Preempting counterproductive or obnoxious state laws should be part of the process, but preempting broad categories of state law before we even understand what it is that we’re preempting is a recipe for disaster. It is true that there are costs associated with this approach; it may eventually allow some state laws that are misguided or harmful to innovation to go into effect. To the extent that such laws are passed, however, they will strengthen the case for preemption. Colorado’s AI Act, for example, has been criticized for being burdensome and difficult to comply with and has also generated considerable political support for broad federal preemption, despite the fact that it has yet to go into effect. By the same token, completely removing states’ ability to regulate, even as AI capabilities improve rapidly and real risks begin to manifest, may create considerable political pressure for heavy-handed regulation and ultimately result in far greater costs than industry would otherwise have faced. Ignoring the lessons of history and blindly implementing premature and overbroad preemption of state AI laws is a recipe for a disaster that would harm both the AI industry and the general public.

The Genesis Mission Executive Order: What It Does and How It Shapes the Future of AI-Enabled Scientific Research

Summary

On November 24, the White House released an Executive Order launching the Genesis Mission—a bold plan to build a unified national AI-enabled science platform linking federal supercomputers, secure cloud networks, public and proprietary datasets, scientific foundation models, and even automated laboratory systems. The Administration frames the Genesis Mission as a Manhattan Project-scale scientific effort.

The EO lays out the organizational and planning framework for the Genesis Mission and tasks the Department of Energy with assembling the resources required to launch it. Working in highly consequential scientific domains—such as biotechnology, where dual-use safety and security issues routinely arise—gives the federal government a timely opportunity to build the oversight and governance capacity that will be needed as AI-enabled science advances.

1. What the EO Actually Does

The EO directs the DOE and White House Office of Science and Technology Policy (OSTP) to spend the next year defining the scope of the Genesis Mission and proving what can be done using existing authority and appropriations. It’s important to keep in mind that an Executive Order cannot itself create new funding or new legal authority, so future steps will depend on Congressional action.

Mandated near-term tasks include:

  1. Identify at least twenty “science and technology challenges of national importance” that must span priority domains such as biotechnology, advanced manufacturing, critical materials, quantum computing, nuclear science, and semiconductors. DOE will start, and OSTP will expand and finalize the list.
  2. Inventory all relevant federal resources, including computing, data, networking, and automated experimentation capabilities.
  3. Define initial datasets and AI models and develop a plan with “risk-based cybersecurity measures” that will enable incorporating data from federally funded research, other agencies, academia, and approved private sector partners.
  4. Produce an initial demonstration of the “American Science and Security Platform,” using only currently available tools and legal authorities.

These are primarily coordination and planning tasks aimed at defining the scope of an integrated AI science platform and demonstrating what can be done with existing resources within DOE. DOE’s activities set forth in the EO appear to align with Section 50404 of the One Big Beautiful Bill Act (OBBBA) reconciliation bill (H.R. 1), which appropriates $150 million through September 2026 to DOE for work on “transformational artificial intelligence models.” Although not referenced in the EO, Section 50404 directs DOE to develop a public-private infrastructure to curate large scientific datasets and create “self-improving” AI models with applications such as more efficient chip design and new energy technologies. DOE’s Section 50404 appropriation is the subject of an ongoing Request for Information (RFI), in which DOE is seeking input on how to structure and implement such public-private research consortia.

The EO does not itself mandate building the full system beyond DOE. Rather, these steps begin the process of assembling underlying infrastructure. The EO outlines broad interagency coordination, but key details need to be worked out, including who can access the platform, how users will be vetted, and whether it will be open to broad scientific use or limited to national security-priority domains.

In that sense, the EO is best understood as establishing the groundwork for a future AI-enabled and automated science infrastructure—while its full build-out will depend on Congress, other agencies, and private sector partnerships.

2. Who Holds the Pen

The Genesis Mission envisions centralized leadership for interagency coordination, with two primary actors:

Technical leadership will likely sit with Under Secretary for Science Darío Gil, who oversees the DOE national labs and major research programs. Strategic coordination, including interactions with other agencies and industry, will likely run through Michael Kratsios, OSTP Director and Presidential Science Advisor.

The EO directs only DOE to take specific actions. This means that, for now, the interagency coordination is more aspirational than operational, and it will likely depend on Congressional action to add or redirect funding for work on the Genesis Mission. At this point, the EO envisions DOE as the primary operator of the ultimate platform, with OSTP shaping strategy. The practical impact of the Mission will largely depend on how these resources are ultimately shared and made accessible across agencies, which the EO leaves open for now.

3. The Goal: Accelerating High-Stakes Science

Here’s where the Genesis Mission may be most consequential. The EO envisions a platform that sits at the center of scientific domains with national and economic significance. These are areas where the combination of AI models, data drawn from government and private databases, and the ability to run large numbers of automated experiments can provide high leverage.

For example, in biological research, an integrated AI-science platform could accelerate drug development, improve biomanufacturing, strengthen pandemic preparedness, tackle chronic disease, and support emerging industries that can help economic growth and allow the United States to maintain global leadership. DOE is well positioned to contribute here, given its national laboratories, high-performance computing, and experience managing large-scale scientific infrastructure.

The Genesis Mission EO suggests that the Administration expects the Mission to support research with high scientific value as well as complex security and safety considerations. While it doesn’t reference new or existing regulations, the EO requires DOE to operate the platform consistent with:

A system that integrates large biological datasets, frontier-scale foundation models, and automated lab workflows could dramatically accelerate discovery. It’s important to keep in mind, however, that such capabilities can also intersect with longstanding dual-use concerns: areas where the same tools that advance beneficial research might also lower barriers to potential harms.

4. Why Governance Matters for the Genesis Mission

Biology offers a clear example of the kinds of oversight challenges that can arise as AI accelerates scientific research. AI and lab automation can lower barriers to research that manipulates or enhances dangerous pathogens, often referred to as “gain-of-function” research.

Importantly, the launch of the Genesis Mission comes while key federal biosafety revisions are still in progress. In May, the White House issued Executive Order 14292, “Improving the Safety and Security of Biological Research.” That EO called for strengthening oversight of certain high-consequence biological research, including gain-of-function research. It imposed several tasks on OSTP, including:

Since then, there has been partial progress towards these goals, including NIH and USDA funding bans on gain-of-function research. But several other updates called for in EO 14292 have not been finalized. The Genesis Mission creates both an opportunity and a need to advance this work. By accelerating AI-enabled scientific research, the Mission heightens the importance of clear, modernized biosafety and biosecurity guidance—and gives the Administration a natural venue to advance it.

As DOE begins integrating advanced computation, large biological datasets, and automated experimentation, it becomes even more valuable to clarify how federal guidance should apply to AI-augmented research. The Genesis Mission may ultimately help spur the release of updated oversight frameworks and encourage broader policy discussions—including potential legislation—on how to manage dual-use research in the era of integrated AI for science platforms.

These issues aren’t limited to biology either. The Genesis Mission EO names nuclear science, quantum computing, advanced materials, and other domains where AI-accelerated discovery creates both major opportunities and critical governance issues.

5. The Hard Policy Questions Ahead

At first, the Genesis Mission will likely be a largely DOE-run effort limited to federal researchers and a small group of partners. But if it grows along the ambitious lines the EO lays out, managing who can access it—and how—becomes far more challenging. Once integrated AI-driven systems can design, optimize, or automate significant parts of scientific research, regulation becomes both urgent and harder to enforce in a uniform way:

These private and academic systems may be entirely outside federal oversight, complicating attempts to build coherent guardrails.

If the Genesis Mission succeeds, it will generate substantial new scientific data that will help train more capable models and enable new research pathways. At the same time, access to more powerful models and broader datasets will increase the importance of developing effective policies for data governance, user access, and managing research across the government and with the private sector.

6. Bottom Line

The Genesis Mission sets an ambitious vision for a unified AI-enabled science platform within the federal government. Its success will depend on future funding, interagency participation, and sustained follow-through. But even at this early planning stage, the EO brings core policy issues to the surface: oversight, data governance, access rules, and how to manage research that cuts across agencies and private sector entities.

As DOE and OSTP begin work on the Genesis Mission, the federal government also has a timely opportunity to update dual-use oversight frameworks, such as the biosafety frameworks called for by EO 14292, and to build the governance structures needed for AI-accelerated science.

Legal Issues Raised by the Proposed Executive Order on AI Preemption

On November 19, 2025, a draft executive order that the Trump administration may issue as early as Friday, November 21 was publicly leaked. The six-page order consists of nine sections, including prefatory purpose and policy statements, a section containing miscellaneous “general provisions,” and six substantive provisions. This commentary provides a brief overview of some of the most important legal issues raised by the draft executive order (DEO). This commentary is not intended to be comprehensive, and LawAI may publish additional commentaries and/or updates as events progress and additional legal issues come to light.

As an initial matter, it’s important to understand what an executive order is and what legal effect executive orders have in the United States. An executive order is not a congressionally enacted statute or “law.” While Congress undoubtedly has the authority to preempt some state AI laws by passing legislation, the President generally cannot unilaterally preempt state laws by presidential fiat (nor does the DEO purport to do so). An executive order can publicly announce the policy goals of the executive branch of the federal government, and can also contain directives from the President to executive branch officials and agencies.

Issue 1: The Litigation Task Force

The DEO’s first substantive section, § 3, would instruct the U.S. Attorney General to “establish an AI Litigation Task Force” charged with bringing lawsuits in federal court to challenge allegedly unlawful state AI laws. The DEO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that the Department of Justice (DOJ) can identify.

Dormant commerce clause arguments

Presumably, the DEO’s reference to the commerce clause refers to the dormant commerce clause argument laid out by Andreessen Horowitz in September 2025. This argument, which a number of commentators have raised in recent months, suggests that certain state AI laws violate the commerce clause of the U.S. Constitution because they impose excessive burdens on interstate commerce. LawAI’s analysis indicates that this commerce clause argument, at least with respect to the state laws specifically referred to in the DEO, is legally meritless and unlikely to succeed in court. We intend to publish a more thorough analysis of this issue in the coming weeks in addition to the overview included here.

In 2023, the Supreme Court issued an important dormant commerce clause opinion in the case of National Pork Producers Council v. Ross. The thrust of the majority opinion in that case, authored by Justice Gorsuch, is that state laws generally do not violate the dormant commerce clause unless they involve purposeful discrimination against out-of-state economic interests in order to favor in-state economic interests.

Even proponents of this dormant commerce clause argument typically acknowledge that the state AI laws they are concerned with generally do not discriminate against out-of-state economic interests. Therefore, they often ignore Ross, or cite the dissenting opinions while ignoring the majority. Their preferred precedent is Pike v. Bruce Church, Inc., a 1970 case in which the Supreme Court held that a state law with “only incidental” effects on interstate commerce does not violate the dormant commerce clause unless “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” This standard opens the door for potential challenges to nondiscriminatory laws that arguably impose a “clearly excessive” burden on interstate commerce.

The state regulation that was invalidated in Pike would have required cantaloupes grown in Arizona to be packed and processed in Arizona as well. The only state interest at stake was the “protect[ion] and enhance[ment] of [cantaloupe] growers within the state.” The Court in Pike specifically acknowledged that “[w]e are not, then, dealing here with state legislation in the field of safety where the propriety of local regulation has long been recognized.”

Even under Pike, then, it’s hard to come up with a plausible argument for invalidating the state AI laws that preemption advocates are concerned with. Andreessen Horowitz’s argument is that the state proposals in question, such as New York’s RAISE Act, “purport to have significant safety benefits for their residents,” but in fact “are unlikely” to provide substantial safety benefits. But this is, transparently, a policy judgment, and one with which the state legislature of New York evidently disagrees. As Justice Gorsuch observes in Ross, “policy choices like these usually belong to the people and their elected representatives. They are entitled to weigh the relevant ‘political and economic’ costs and benefits for themselves, and ‘try novel social and economic experiments’ if they wish.” New York voters overwhelmingly support the RAISE Act, as did an overwhelming majority of New York’s state legislature when the bill was put to a vote. In my opinion, it is unlikely that any federal court will presume to override those policy judgments and substitute its own.

That said, it is possible to imagine a state AI law that would violate the dormant commerce clause. For example, a law that placed burdensome requirements on out-of-state developers while exempting in-state developers, in order to grant an advantage to in-state AI companies, would likely be unconstitutional. Since I haven’t reviewed every state AI bill that has been or will be proposed, I can’t say for sure that none of them would violate the dormant commerce clause. It is entirely possible that the Task Force will succeed in invalidating one or more state laws via a dormant commerce clause challenge. It does seem relatively safe, however, to predict that the specific laws referred to in the executive order and the state frontier AI safety laws most commonly referenced in discussions of preemption would likely survive any dormant commerce clause challenges brought against them.

State laws preempted by existing federal regulations

Section 3 of the DEO also specifically indicates that the AI Litigation Task Force will challenge state laws that “are preempted by existing Federal regulations.” It is possible for state laws to be preempted by federal regulations, and, as with the commerce clause issue discussed above, it’s possible that the Task Force will eventually succeed in invalidating some state laws by arguing that they are so preempted.

In the absence of significant new federal AI regulation, however, it is doubtful whether many of the state laws the DEO is intended to target will be vulnerable to this kind of legal challenge. Moreover, any state AI law that created significant compliance costs for companies and was plausibly preempted by existing federal regulations could be challenged by the affected companies, without the need for DOJ intervention. The fact that (to the best of my knowledge) no such lawsuit has yet been filed challenging the most notable state AI laws indicates that the new Task Force will likely be faced with slim pickings, at least until new federal regulations are enacted and/or state regulation of AI intensifies.

It seems likely that § 3’s reference to preemption via existing federal regulation is at least partially intended to refer to Communications Act preemption as discussed in the AI Action Plan. There is a major obstacle to preempting state AI laws under the Communications Act, however: the Communications Act provides the FCC (and sometimes courts) with some authority to preempt certain state laws regulating “telecommunications services” and “information services,” but existing legal precedents clearly establish that AI systems are neither “telecommunications services” nor “information services” under the Communications Act. In his comprehensive policy paper on FCC preemption of state AI laws, Lawrence J. Spiwak (a staunch supporter of preemption) analyzes the relevant precedents and concludes that “given the plain language of the Communications Act as well as the present state of the caselaw, it is highly unlikely the FCC will succeed in [AI preemption] efforts” and that “trying to contort the Communications Act to preempt the growing patchwork of disparate state AI laws is a Quixotic exercise in futility.” Harold Feld of Public Knowledge essentially agrees with this assessment in his piece on the same topic.

Alternative grounds

Section 3 also authorizes the Task Force to challenge state AI laws that are “otherwise unlawful” in the Attorney General’s judgment. The Department of Justice employs a great number of smart and creative lawyers, so it’s impossible to say for sure what theories they might come up with to challenge state AI laws. That said, preemption of state AI laws has been a hot topic for months now, and the best theories that have been publicly floated for preemption by executive action are the dormant commerce clause and Communications Act theories discussed above. This is, it seems fair to say, a bearish indicator, and I would be somewhat surprised if the Task Force managed to come up with a slam dunk legal argument for broad-based preemption that has hitherto been overlooked by everyone who’s considered this issue.

Issue 2: Restrictions on State Funding

Section 5 of the DEO contains two subsections that concern efforts to withhold federal grant funding from states that attempt to regulate AI. Subsection (a) indicates that the Department of Commerce will attempt to withhold non-deployment Broadband Equity Access and Deployment (BEAD) funding “to the maximum extent allowed by federal law” from states with AI laws listed pursuant to § 4 of the DEO, which instructs Commerce to identify state AI laws that conflict with the policy directives laid out in § 1 of the DEO. Subsection (b) instructs all federal agencies to assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or identified as undesirable pursuant to § 4.

In my view, § 5 of the DEO is the provision with the most potential to affect state AI legislation. While § 5 does not contain any attempt to actually preempt state AI laws, the threat of losing federal grant funds could have the practical effect of incentivizing some states to abandon their AI-related legislative efforts. And, as Daniel Cochrane and Jack Fitzhenry pointed out during the reconciliation moratorium fight, “Smaller conservative states with limited budgets and large rural populations need [BEAD] funds. But wealthy progressive states like California and New York can afford to take a pass and just keep enforcing their tech laws.” While politicians in deep blue states will be politically incentivized to fight the Trump administration’s attempts to preempt overwhelmingly popular AI laws even if it means losing access to some federal funds, politicians in red states may instead be incentivized to avoid conflict with the administration.

Section 5(a): Non-deployment BEAD funding

Section 5(a) of the DEO is easier to analyze than § 5(b), because it clearly identifies the funds that are in jeopardy—non-deployment BEAD funding. The BEAD program is a $42.45 billion federal grant program established by Congress in 2021 for the purpose of facilitating access to reliable, high-speed broadband internet for communities throughout the U.S. A portion of the $42.45 billion total was allocated to each of 56 states and territories in June 2023 by the National Telecommunications and Information Administration (NTIA). In June 2025, the NTIA announced a restructuring of the BEAD program that eliminated many Biden-era requirements and rescinded NTIA approval for all “non-deployment” BEAD funding, i.e., BEAD funding that states intended to spend on uses other than actually building broadband infrastructure. The total amount of BEAD funding that will ultimately be classified as “non-deployment” is estimated to be more than $21 billion.

BEAD funding was previously used as a carrot and stick for AI preemption in June 2025, as part of the effort to insert a moratorium or “temporary pause” on state AI regulation into the most recent reconciliation bill. There are two critical differences between the attempted use of BEAD funding in the reconciliation process and its use in the DEO, however. First, the DEO is, obviously, an executive order rather than a legislative enactment. This matters because agency actions that would be perfectly legitimate if authorized by statute can be illegal if undertaken without statutory authorization. Second, while the final drafts of the reconciliation moratorium would only have jeopardized BEAD funding belonging to states that chose to accept a portion of $500 million in additional BEAD funding that the reconciliation bill would have appropriated, the DEO would jeopardize non-deployment BEAD funding belonging to any state that attempts to regulate AI in a manner deemed undesirable under the DEO.

The multibillion-dollar question here is: can the administration legally withhold BEAD funding from states because those states enact or enforce laws regulating AI? I am going to cop out and say, honestly, that I don’t know for certain at this point in time. There are a number of potential legal issues with the course of action that the DEO contemplates, but as of November 20, 2025 (one day after the DEO first leaked) no one has published a definitive analysis of whether the administration will be able to overcome these obstacles.

The Trump administration’s Department of Transportation (DOT) recently attempted a maneuver similar to the one contemplated in the DEO when, in response to an executive order directing agencies to “undertake any lawful actions to ensure that so-called ‘sanctuary’ jurisdictions… do not receive access to federal funds,” the DOT attempted to add conditions to all DOT grant agreements requiring grant recipients to cooperate in the enforcement of federal immigration law. Affected states promptly sued to challenge the addition of this grant condition and successfully secured a preliminary injunction prohibiting DOT from implementing or enforcing the conditions. In early November 2025, the U.S. District Court for the District of Rhode Island ruled that the challenged conditions were unlawful for three separate reasons: (1) imposing the conditions exceeded the DOT’s statutory authority under the laws establishing the relevant grant programs; (2) imposing the conditions was “arbitrary and capricious,” in violation of the Administrative Procedure Act; and (3) imposing the conditions violated the Spending Clause of the U.S. Constitution. It remains to be seen whether the district court’s ruling will be upheld by a federal appellate court and/or by the U.S. Supreme Court.

Suppose that, in the future, the Department of Commerce decides to withhold non-deployment BEAD funding from states with AI laws deemed undesirable under the DEO. States could challenge this decision in court and ask the court to order NTIA to release the previously allocated non-deployment funds to the states, arguing that the withholding of funds exceeded NTIA’s authority under the statute authorizing BEAD, violated the APA, and violated the Spending Clause. Each of these arguments seems at least somewhat plausible, on an initial analysis. Nothing in the statute authorizing BEAD appears to give the federal government unlimited discretion to withhold BEAD funds to vindicate policy goals that have little or nothing to do with access to broadband; rescinding previously awarded grant funds and then withholding them in order to further goals not contemplated by Congress is at least arguably arbitrary and capricious; and the course of action proposed in the DEO is, arguably, impermissibly coercive in violation of the Spending Clause.

AI regulation is a less politically divisive issue than immigration enforcement, and a cynical observer might assume that this would give states in this hypothetical AI case a better chance on appeal than the states in the DOT immigration conditions case discussed above. However, there are a number of differences between the DOT conditions case and the course of action contemplated in the DEO that could make it harder—or easier—for states to prevail in court. Accurately estimating states’ chances of success with high confidence will take more than one day’s worth of analysis.

It should also be noted that, regardless of whether states could eventually prevail in a hypothetical lawsuit, the prospect of having BEAD funding denied or delayed, perhaps for years, could be enough to discourage some states from enacting AI legislation of a type disfavored by the Department of Commerce under the DEO.

Section 5(b): Other discretionary agency funding

In addition to withholding non-deployment BEAD funding, the DEO would instruct agencies throughout the executive branch to “take immediate steps to assess their discretionary grant programs and determine whether agencies may condition such grants on States either not enacting an AI law that conflicts with the policy of this order… or, for those States that have enacted such laws, on those States entering into a binding agreement with the relevant agency not to enforce any such laws during any year in which it receives the discretionary funding.”

The legality of this contemplated course of action, and its likelihood of being upheld in court, are even more difficult to determine conclusively ex ante than the legality and prospects of the BEAD withholding discussed above. The federal government distributes about a trillion dollars a year in grants to state and local governments, and more than a quarter of that money is in the form of discretionary grants (as opposed to grants from mandatory programs such as Medicaid). That’s a lot of money, and it’s broken up into a lot of different discretionary grants. It’s likely that many of the arguments against withholding grant money from AI-regulating states would be the same from one grant to another. However, it is also likely that there are some discretionary grants to states that could more reasonably be conditioned on compliance with the President’s deregulatory AI policy directives and other grants for which such conditioning would be less reasonable. Ultimately, further research into this issue is needed to determine how much state grant funding, if any, is legitimately at risk.

Issue 3: Federal Reporting and Disclosure

Section 6 of the DEO instructs the FCC, in consultation with AI czar David Sacks, to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” Presumably, “conflicting state laws” is intended to refer to state AI transparency laws such as California’s SB 53 and New York’s RAISE Act. It’s not clear from the language of the DEO what legal authority this “Federal reporting and disclosure standard” would be promulgated under. Under the Biden administration, the Department of Commerce’s Bureau of Industry and Security (BIS) attempted to impose reporting requirements on frontier model developers under the information-gathering authority provided by § 705 of the Defense Production Act—but § 705 has historically been used by BIS rather than the FCC, and I am not aware of any comparable authority that would authorize the FCC to implement a mandatory “federal reporting and disclosure standard” for AI models.

Generally, regulatory preemption can only occur when Congress has granted an executive-branch agency authority to promulgate regulations and preempt state laws inconsistent with those regulations. This authority can be granted expressly or by implication, but, as noted above in the discussion of Communications Act preemption under § 3 of the DEO, the FCC has never before asserted that it possesses any significant regulatory authority (express or otherwise) over any aspect of AI development. It’s possible that the FCC is relying on a creative interpretation of its authority under the Communications Act—FCC Chairman Brendan Carr previously indicated that the FCC was “taking a look” at whether the Communications Act grants the FCC authority to regulate AI and preempt onerous state laws. However, as discussed above, legal commentators almost universally agree that “[n]othing in the Communications Act confers FCC authority to regulate AI.”

It’s possible that the language of the DEO is simply meant to indicate that the FCC and Sacks will suggest a standard that may then be enacted into law by Congress. This would certainly overcome the legal obstacles discussed above, and could (depending on the language of the statute) allow for preemption of state AI transparency laws. However, it would require passing new federal legislation, which is easier said than done.

Issue 4: Preemption of state laws for “deceptive practices” under the FTC Act

Section 7 of the DEO directs the Federal Trade Commission (FTC) to issue a policy statement arguing that certain state AI laws are preempted by the FTC Act’s prohibition on deceptive commercial practices. Presumably, the laws that the DEO intends this guidance to target include Colorado’s AI Act, which the DEO’s Purpose section accuses of “forc[ing] AI models to embed DEI in their programming, and to produce false results in order to avoid a ‘differential treatment or impact’…” on enumerated demographic groups, and other similar “algorithmic discrimination” laws. A policy statement on its own generally cannot preempt state laws, but it seems likely that the policy statement that the DEO instructs the FTC to create would be relied upon in subsequent preemption-related regulatory efforts and/or by litigants seeking to prevent enforcement of the allegedly preempted laws in court.

While the Trump administration has previously expressed disapproval of “woke” AI development practices, for example in the recent executive order on “Preventing Woke AI in the Federal Government,” this argument that the FTC Act’s prohibition on UDAP (unfair or deceptive acts or practices in or affecting commerce) preempts state algorithmic discrimination laws is, as far as I am aware, new. During the Biden administration, Lina Khan’s FTC published guidance containing an arguably similar assertion: that the “sale or use of—for example—racially biased algorithms” would be an unfair or deceptive practice under the FTC Act. Khan’s FTC did not, however, attempt to use this aggressive interpretation of the FTC Act as a basis for FTC preemption of any state laws.

Colorado’s AI statute has been widely criticized, including by Governor Jared Polis (who signed the act into law) and other prominent Colorado politicians. In fact, the law has proven so problematic for Colorado that Governor Polis, a Democrat, was willing to cross party lines in order to support broad-based preemption of state AI laws for the sake of getting rid of Colorado’s own law. Therefore, an attempt by the Trump administration to preempt Colorado’s law (or portions thereof) might meet with relatively little opposition from within Colorado. It’s not clear who, if anyone, would have standing to challenge FTC preemption of Colorado’s law if Colorado’s attorney general refused to do so. But Colorado is not the only state with a law prohibiting algorithmic discrimination, and presumably the guidance the DEO instructs the FTC to produce would inform attempts to preempt other “woke” state AI laws as well as Colorado’s.

The question of how those attempts would fare in federal court is an interesting one, and I look forward to reading analysis of the issue from commentators with expertise regarding the FTC Act and algorithmic discrimination laws. Unfortunately, I am not such a commentator and will therefore plead ignorance on this point.

The Unitary Artificial Executive

Editor’s note: The following are remarks delivered on October 23, 2025, at the University of Toledo Law School’s Stranahan National Issues Forum. Watch a recording of the address here. This transcript was originally posted at Lawfare.

Good afternoon. I’d like to thank Toledo Law School and the Stranahan National Issues Forum for the invitation to speak with you today. It’s an honor to be part of this series.

In 1973, the historian Arthur Schlesinger Jr., who served as a senior adviser in the Kennedy White House, gave us “The Imperial Presidency,” documenting the systematic expansion of unilateral presidential power that began with Washington and that Schlesinger was chronicling in the shadow of Nixon and Watergate. Each administration since then, Democrat and Republican alike, has argued for expansive executive authorities. Ford. Carter. Reagan. Bush 1. Clinton. Bush 2. Obama. The first Trump administration. Biden. And what we’re watching now in the second Trump administration is breathtaking.

This pattern of ever-expanding executive power has always been driven partly by technology. Indeed, through human history, transformative technologies drove large-scale state evolution. Agriculture made populations large enough for taxation and conscription. Writing enabled bureaucratic empires across time and distance. The telegraph and the railroad annihilated space, centralizing control over vast territories. And computing made the modern administrative state logistically possible. 

For American presidents specifically, this technological progression has been decisive. Lincoln was the first “wired president,” using the telegraph to centralize military command during the Civil War. FDR, JFK, and Reagan all used radio and then television to “go public” and speak directly to the masses. Trump is the undisputed master of social media.

I’ve come here today to tell you: We haven’t seen anything yet.

Previous expansions of presidential power were still constrained by human limitations. But artificial intelligence, or AI, eliminates those constraints—producing not incremental growth but structural transformation of the presidency. In this lecture I want to examine five mechanisms through which AI will concentrate unprecedented authority in the White House, turning Schlesinger’s “Imperial Presidency” into what I call the “Unitary Artificial Executive.” 

The first mechanism is the expansion of emergency powers. AI crises—things like autonomous weapons attacks or AI-enabled cybersecurity breaches—justify broad presidential action, exploiting the same judicial deference to executive authority in emergencies that courts have shown from the Civil War through 9/11 to the present. 

Second, AI enables perfect enforcement through automated surveillance and enforcement mechanisms, eliminating the need for the prosecutorial discretion that has always limited executive power. 

The third mechanism is information dominance. AI-powered messaging can saturate the public sphere through automated propaganda and micro-targeted persuasion, overwhelming the marketplace of ideas.

Fourth, AI in national security creates what scholars call the “double black box”—inscrutable AI nested inside national security secrecy. And when these inscrutable systems operate at machine speed, oversight becomes impossible. Cyber operations and autonomous weapons engagements complete in milliseconds—too fast and too opaque for meaningful oversight.

And fifth—and most dramatically—AI can finally realize the vision of the unitary executive. By that I mean something specific: not just a presidency with broad substantive authorities, but one that exerts complete, centralized control over executive branch decision-making. AI can serve as a cognitive proxy throughout the executive branch, injecting presidential preferences directly into algorithmic decisions, making unitary control technologically feasible for the first time.

These five mechanisms operate in two different ways. The first four expand the practical scope of presidential authority—emergency powers, enforcement, information control, and national security operations. They expand what presidents can do. The fifth mechanism is different. It’s about control. It determines how those powers are exercised. And the combination of these two creates an unprecedented concentration of power.

My argument is forward-looking, but it’s not speculative. From a legal perspective, these mechanisms build on existing presidential powers and fit comfortably within current constitutional doctrine. From a technological perspective, none of this requires artificial superintelligence or even artificial general intelligence. All of these capabilities are doable with today’s tools, and certainly achievable within the next few years.

Now, before we go further, let me tell you where I’m coming from. My academic career has focused on two research areas: first, the regulation of emerging technology, and, second, executive power. Up until now, these have been largely separate. This lecture brings those two tracks together.

But I also have some practical experience that’s relevant to this project. Before becoming a law professor, I was a junior policy attorney in the National Security Division at the Department of Justice. In other words, I was a card-carrying member of what the current administration calls the “deep state.”

One thing I learned is that the federal bureaucracy is very hard to govern. Decision-making is decentralized, information is siloed, civil servants have enormous autonomy—not so much because of their formal authority but because governing millions of employees is, from a practical perspective, impossible. That practical ungovernability is about to become governable.

Together with Nicholas Bednar, my colleague at the University of Minnesota Law School, I’ve been researching how this transformation might happen—and what it means for constitutional governance. This lecture is the first draft of the research we’ve been conducting.

So let’s jump in. To understand how the five mechanisms of expanded presidential power will operate—and why they’re not speculative—we need to start with AI’s actual capabilities. So what can AI actually do today, and what will it be able to do in the near future?

What Can AI Actually Do?

Again, I’m not talking about artificial general intelligence or superintelligence—those remain speculative, possibly decades away. I’m talking about today’s capabilities, including technology that is right now deployed in government systems. 

It’s helpful to think of AI as a pipeline with three stages: collection, analysis, and execution.

The first stage is data collection at scale. The best AI-powered facial recognition achieves over 99.9 percent accuracy, and Clearview AI—used by federal and state law enforcement—has a database of over 60 billion images. The Department of Defense’s Project Maven—an AI-powered video analysis program—demonstrates the impact: 20 people using AI now replicate what required 2,000. That’s a 100-fold increase in efficiency.

The second stage is data analysis. AI analyzes data at scales humans cannot match. FINRA—the financial industry self-regulator—processes 600 billion transactions daily using algorithmic surveillance, a volume that would require an army of analysts. FBI algorithms assess thousands of tip line calls a day for threat level and credibility. Systems like those from the technology company Palantir integrate databases across dozens of agencies in real time. All this analysis happens continuously, comprehensively, and faster than human oversight.

The third stage is automated execution, which operates at speeds and scales outstripping human capabilities. For example, DARPA’s AI-controlled F-16 has successfully engaged human pilots in mock dogfights, demonstrating autonomous combat capability. And the federal cybersecurity agency’s autonomous systems block more than a billion suspicious network connection requests across the federal government every year.

To summarize: AI can sense everything, process everything, and act on everything—all at digital speed and scale.

These are today’s capabilities—not speculation about future AI. But they’re also just the baseline. And they’re scaling up dramatically—driven by two forces. 

The first driver is the internal trajectory of AI itself. Training compute—the processing power used to build AI systems—has increased four to five times per year since 2010. Epoch AI, a research organization tracking AI progress, projects that frontier AI models will use thousands of times more compute than OpenAI’s GPT-4 by 2030, with training clusters costing over $100 billion. 

What will this enable? By 2030 at the latest, AI should be capable of building large-scale software projects, producing advanced mathematical proofs, and engaging in multi-week autonomous research. In government, that means AI systems that don’t just analyze but execute complete, large-scale tasks from start to finish. 

The second driver of AI advancement is geopolitical competition. China’s 2017 AI Development Plan targets global leadership by 2030, backed by massive state investment. They’ve deployed generative AI news anchors and built the nationwide Skynet video surveillance system—and yes, they actually called it that. China’s technical capabilities are advancing rapidly—the DeepSeek breakthrough earlier this year demonstrated that Chinese researchers can match or exceed Western AI performance, often at a fraction of the cost.

In today’s polarized Washington, there’s only one thing Democrats and Republicans agree on: China is a threat that must be confronted. That consensus is driving much of AI policy. So it’s unsurprising that the administration’s recent AI Action Plan frames the U.S. response as seeking “unquestioned … technological dominance.” Federal generative AI use cases have increased ninefold in one year, and the Defense Department awarded $800 million in AI contracts this past July. The department has also established detailed procedures for developing autonomous lethal weapons, reflecting the Pentagon’s assumption that such systems are the future. 

It’s easy to see how this competitive dynamic could be used to justify concentrating AI in the executive branch. “We can’t afford congressional delays. Transparency would give adversaries advantages. Traditional deliberation is incompatible with the speed of AI development.” The AI arms race could easily become a permanent emergency justifying rapid deployment.

Five Mechanisms Through Which AI Concentrates Presidential Power

So those are the drivers of AI progress—rapidly advancing capabilities and geopolitical pressure. Now let’s examine the five distinct mechanisms through which these forces will actually concentrate presidential power.

Mechanism 1: Emergency Powers

Presidential emergency powers rest on two sources with deep historical roots. The first is inherent presidential authority under Article II. For example, during the Civil War, Lincoln blockaded Southern ports, expanded the army, and spent unauthorized funds, all while claiming inherent constitutional authority as commander in chief.

The second source of emergency powers is explicit congressional delegation. When FDR closed every bank in March 1933, he did so under the Trading with the Enemy Act. After 9/11, Congress passed an Authorization for Use of Military Force—still in effect two decades later and the source of ongoing military operations across multiple continents. Today the presidency operates under more than 40 continuing national emergencies. For example, Trump has invoked the International Emergency Economic Powers Act (IEEPA) to impose many of his ongoing tariffs, declaring trade imbalances a national security emergency.

With both sources, courts usually defer. From the Prize Cases upholding Lincoln’s Southern blockade through Korematsu affirming Japanese internment to Trump v. Hawaii permitting the first Trump administration’s Muslim travel bans, the Supreme Court has generally granted presidents extraordinary latitude during emergencies. There are of course exceptions—Youngstown and the post-9/11 cases like Hamdi and Boumediene being the most famous—but the pattern is clear: When the president invokes national security or emergency powers, judicial review is limited. 

So what has constrained emergency powers? The emergencies themselves. Throughout history, emergencies were rare and time limited—the Civil War, the Great Depression, Pearl Harbor, 9/11. Wars ended, and crises receded. Our separation-of-powers framework has worked because it assumes emergencies have generally been the temporary exception, not the norm.

AI breaks this assumption.

AI empowers adversaries asymmetrically—giving offensive capabilities that outpace defensive responses. Foreign actors can use AI to identify vulnerabilities, automate attacks, and target critical infrastructure at previously impossible scale and speed. The same AI capabilities that strengthen the president also strengthen our adversaries, creating a perpetual heightened threat that justifies permanent emergency powers. 

Here’s what an AI-enabled emergency might look like. A foreign adversary uses AI to target U.S. critical infrastructure—things like the power grid, financial systems, or water treatment. Within hours, the president invokes IEEPA, the Defense Production Act, and inherent Article II authority. AI surveillance monitors all network traffic. Algorithmic screening begins for financial transactions. And compliance monitoring extends across critical infrastructure.

The immediate crisis might pass in 48 hours, but the emergency infrastructure never gets dismantled. Surveillance remains operational, and each emergency builds infrastructure for the next one.

Why does our constitutional system permit this? First, speed: Presidential action completes before Congress can react. Second, secrecy: Classification shields details from Congress, courts, and the public. Third, judicial deference: Courts defer almost automatically when “national security” and “emergency” appear in the same sentence. And, as if to add insult to injury, the president’s own AI systems might soon be the ones assessing threats and determining what counts as an emergency.

Mechanism 2: Perfect Enforcement

Emergency powers are—theoretically, at least—episodic. But enforcement of the laws happens continuously, every day, in every interaction between citizen and state. That’s where the second mechanism—perfect enforcement—operates.

Pre-AI governance depends on enforcement discretion. We have thousands of criminal statutes and millions of regulations, and so, inevitably, prosecutors have to choose cases, agencies have to prioritize violations, and police have to exercise judgment. The Supreme Court has recognized this necessity: In cases like Heckler v. Chaney, Batchelder, and Wayte, the Court held that non-enforcement decisions are presumptively unreviewable because agencies must allocate scarce resources. This discretion prevents tyranny by allowing mercy, context, and human judgment. 

AI eliminates that necessity. When every violation can be detected and every rule can be enforced, enforcement discretion becomes a choice rather than a practical constraint. The question becomes: What happens when the Take Care Clause meets perfect enforcement? Does the Take Care Clause allow the president to enforce the laws to the hilt? Might it require him to? 

As an example, consider what perfect immigration enforcement might look like. (And you can imagine this across every enforcement domain: tax compliance, environmental violations, workplace safety—even traffic laws.) Already facial recognition databases cover tens of millions of Americans, real-time camera networks monitor movement, financial systems track transactions, social media analysis identifies patterns, and automated risk assessment scores individuals. Again, China is leading the way—its “social credit” system demonstrates what’s possible when these technologies are integrated.

Now imagine the president directs DHS to do the same: build a single AI system that identifies every visa overstay and automatically generates enforcement actions. There are no more “enforcement priorities”—the algorithm flags everyone, and ICE officers blindly execute its millions of directives with perfect consistency.

Why does the Constitution allow this? The Take Care Clause traditionally required discretion because resource limits made total enforcement impossible. But AI changes this. Now the Take Care Clause can be read as consistent with eliminating discretion—the president isn’t violating his duty by enforcing everything, he’s just being thorough.

More aggressively: The president might argue that perfect enforcement is not just permitted but required. Congress wrote these laws, and the president is merely faithfully executing what Congress commanded now that technology makes it possible. If there’s no resource constraint, there’s no justification for discretion.

What about Equal Protection or Due Process? The Constitution might actually favor algorithmic enforcement. Equal Protection could be satisfied by perfect consistency if algorithmic enforcement treats identical violations identically, eliminating the arbitrary disparities that plague human judgment. And Due Process might be satisfied if AI proves more accurate than humans, which it may well be. Power once dispersed among millions of fallible officials becomes concentrated in algorithmic policy that could, compared to the human alternative, be more consistent, more accurate, and more just.

There’s one final effect that perfect enforcement produces: It ratchets up punishment beyond congressional intent. Congress wrote laws assuming enforcement discretion would moderate impact. They set harsh penalties knowing prosecutors would focus on serious cases and agencies would prioritize egregious violations, while minor infractions would largely be ignored.

But AI removes that backdrop. When every violation is enforced—even trivial ones Congress never expected would be prosecuted—the net effect is dramatically higher punitiveness. Congress calibrated the system assuming discretion would filter out minor cases. AI enforces everything, producing an aggregate severity Congress never intended.

Mechanism 3: Information Dominance

The first two mechanisms concentrating presidential power—emergency powers and perfect enforcement—expand what the president can do. The third mechanism is about controlling what citizens know. AI enables the president to saturate public discourse at unprecedented scale. And if the executive controls what citizens see, hear, and believe, how can Congress, courts, or the public effectively resist?

The Supreme Court has held that the First Amendment doesn’t restrict the government’s own speech. This government speech doctrine means that the government can select monuments, choose license plate messages, and communicate preferred policies—all with no constitutional limit on volume, persistence, or sophistication.

Until now, practical constraints limited the scale of this speech—more messages required more people, more time, and more resources. AI eliminates these constraints, enabling content generation at near-zero marginal cost, operating across all platforms simultaneously, and delivering personalized messages to every citizen. The government speech doctrine never contemplated AI-powered saturation, and there is no limiting principle in existing case law.

Again, look to China for the future—it’s already using AI to saturate public discourse. In August, leaked documents revealed that GoLaxy, a Chinese AI company, built a “Smart Propaganda System”—AI that monitors millions of posts daily and generates personalized counter-messaging in real time, producing content that “feels authentic … and avoids detection.” The Chinese government has used it to suppress Hong Kong protest movements and influence Taiwanese elections. 

Now imagine an American president deploying these capabilities domestically.

It’s 2027. A major presidential scandal breaks—Congress investigates, courts rule executive actions unconstitutional, and in response the Presidential AI Response System activates. It floods social media platforms, news aggregators, and recommendation algorithms with government-generated content.

You’re a suburban Ohio parent worried about safety, and your phone shows AI-generated content about how the congressional investigation threatens law enforcement funding, accompanied by fake “local crime statistics.” Your neighbor, a student at the excellent local law school, is concerned about civil liberties—she sees completely different content about “partisan witch hunts” undermining due process. Same scandal, different narratives—the public can’t even agree on basic facts.

The AI system operates in three layers. First, it generates personalized messaging, detecting which demographics are persuadable and which narratives are gaining traction, A/B testing and adjusting counter-messages in real time. Second, it manipulates platform algorithms, persuading social media companies to down-rank “disinformation”—which means congressional hearings never surface in your feed and news about court decisions gets buried. Third, it saturates public discourse through sheer volume, generating millions of messages across all platforms that drown out opposition not through censorship but through scale that private speakers can’t match. 

And all the while the First Amendment offers no constraint because the government speech doctrine allows the government to say whatever it wants, as much as it wants.

Information dominance makes resistance to the other mechanisms impossible. How do you organize opposition to emergency powers if you never hear about them? How do you resist perfect enforcement if you’ve been convinced it’s necessary? And how do you check national security decisions if you’re convinced of the threat—and if you can’t understand how the AI made the decision in the first place?

Which brings us to the fourth mechanism.

Mechanism 4: The National Security Black Box

National security is where presidential power reaches its apex. The Constitution grants the president enormous authority as commander in chief, with control over intelligence and classification, and courts have historically granted extreme judicial deference. Courts defer to military decisions, and the “political question” doctrine bars review of many national security judgments.

Congress retains constitutional checks—the power to declare war, appropriate funds, demand intelligence briefings, and conduct investigations. But AI creates what University of Virginia law professor Ashley Deeks calls the “double black box”—a problem that renders these checks ineffective.

The first—inner—box is AI’s opacity. AI systems are inscrutable black boxes that even their designers can’t fully explain. Congressional staffers lack technical expertise to evaluate them, and courts have no framework for evaluating algorithmic military judgments. No one—not even the executive branch officials nominally in charge—can explain why the AI reached a particular decision.

The second—outer—box is traditional national security secrecy. Classification shields operational details and the state secrets privilege blocks judicial review. The executive controls intelligence access, meaning Congress depends on the executive for the very information needed for oversight.

These layers combine: Congress can’t oversee what it can’t see or understand. Courts can’t review what they can’t access or evaluate. The public can’t hold anyone accountable for what’s invisible and incomprehensible.

And then speed makes things worse. AI operations complete in minutes, if not seconds, creating fait accompli before oversight can engage. By the time Congress learns what happened through classified briefings, facts on the ground have changed. Even if Congress could overcome both layers of inscrutability, it would be too late to restrain executive action.

Consider what this could look like in practice. It’s 3:47 a.m., and a foreign military AI probes U.S. critical infrastructure: This time it’s the industrial-control systems that control the eastern seaboard’s electrical grid.

Just 30 milliseconds later, U.S. Cyber Command’s AI detects the intrusion and assesses a 99.7 percent probability that this is reconnaissance for a future attack. 

Less than a second later, the AI decision tree executes. It evaluates options—monitoring is insufficient, counter-probing is inadequate, blocking would only be temporary—and selects a counterattack targeting foreign military command and control. The system accesses authorization from pre-delegated protocols and deploys malware.

Three minutes after the initial probe, the U.S. AI has disrupted foreign military networks, taking air defense offline, compromising communications, and destabilizing the attackers’ own power grids.

At 3:51 a.m., a Cyber Command officer is notified of the completed operation. At 7:30 a.m., the president receives a briefing over breakfast about a serious military operation that she—supposedly the commander in chief—had no role in. But she’s still better off than congressional leadership, which only learns about the operation later that day when CNN breaks the story.

This won’t be an isolated incident. Each AI operation completes before oversight is possible, establishing precedent for the next. By the time Congress or courts respond, strategic facts have changed. The constitutional separation of war powers requires transparency and time—both of which AI operations eliminate.

Mechanism 5: Realizing the Unitary Executive

The first four mechanisms—emergency powers, perfect enforcement, information dominance, and inscrutable national security decisions—expand the scope of presidential power. Each extends presidential reach.

But the fifth mechanism is different. It’s not about doing more but about controlling how it gets done. After all, how is a single president supposed to control a bureaucracy of nearly 3 million employees making untold decisions every day? The unitary executive theory has been debated for over two centuries and has recently become the dominant constitutional position at the Supreme Court. But in all this time it’s always been, practically speaking, impossible. AI removes that practical constraint.

Article II, Section 1, states that “The executive Power shall be vested in a President.” THE executive power. A President. Singular. This is the textual foundation for the unitary executive theory: the idea that all executive authority flows through one person and that this one person must therefore control all executive authority. 

The main battleground for this theory has been unilateral presidential firing authority. If the president can fire subordinates at will, control follows. The First Congress debated this in 1789, when James Madison proposed that department secretaries be removable by the president alone. Congress’s decision at the time implied that the president had such a power, but we’ve been fighting about presidential control ever since. 

The Supreme Court has zigzagged on this issue, from Myers in 1926 affirming presidential removal power, to Humphrey’s Executor less than a decade later carving out huge exceptions for independent agencies, to Morrison v. Olson in 1988, where Justice Antonin Scalia’s lone dissent defended the unitary executive. But by Seila Law v. CFPB in 2020, Scalia’s dissent had become the majority view. Unitary executive theory is now ascendant. (And we’ll see how far the Court pushes it when it decides on Federal Reserve Board independence later this term.)

But in a practical sense, the constitutional questions have always been second-order. Even if the president had constitutional authority for unitary control, practical reality made it impossible. Harry Truman famously quipped about Eisenhower upon his election in 1952: “He’ll sit here [in the Oval Office] and he’ll say, ‘Do this! Do that!’ And nothing will happen. Poor Ike—it won’t be a bit like the Army. He’ll find it very frustrating.”

One person just can’t process information from millions of employees, supervise 400 agencies, and know what subordinates are doing across the vast federal bureaucracy. Career civil servants can slow-roll directives, misinterpret guidance, quietly resist—or simply just not know what the president wants them to do. The real constraint on presidential power has always been practical, not constitutional.

But AI removes those constraints. It transforms the unitary executive theory from a constitutional dream into an operational reality.

Here’s a concrete example—real, not hypothetical. In January, the Trump administration sent a “Fork in the Road” email to federal employees: return to office, accept downsizing, pledge loyalty, or take deferred resignation. DOGE—the Department of Government Efficiency—deployed Meta’s Llama 2 AI model to review and classify responses. In a subsequent email, DOGE asked employees to describe weekly accomplishments and used AI to assess whether work was mission critical. If AI can determine mission-criticality, it can assess tone, sentiment, loyalty, or dissent.

DOGE analyzed responses to one email, but the same technology works for all emails, every text message, every memo, and every Slack conversation. Federal email systems are centrally managed, workplace platforms are deployed government-wide, and because Llama is open source, Meta can’t refuse to have its systems used in this way. And because federal employees have limited privacy expectations in their work communications, the Fourth Amendment permits most government surveillance. 

Monitoring is just the beginning. The real transformation comes from training AI on presidential preferences. The training data is everywhere: campaign speeches, policy statements, social media, executive orders, signing statements, tweets, all continuously updated. The result is an algorithmic representation of the president’s priorities. Call it TrumpGPT.

Deploy that model throughout the executive branch and you can route every memo through the AI for alignment checks, screen every agenda for presidential priorities, and evaluate every recommendation against predicted preferences. The president’s desires become embedded in the workflow itself.

But it goes further. AI can generate presidential opinions on issues the president never considered. Traditionally, even the wonkiest of presidents have had enough cognitive bandwidth for only 20, maybe 30 marquee issues—immigration, defense, the economy. Everything else gets delegated to bureaucratic middle management.

But AI changes this. The president can now have an “opinion” on everything. EPA rule on wetlands permits? The AI cross-references it with energy policy. USDA guidance on organic labeling? Check against agricultural priorities. FCC decision on rural broadband? Align with public statements on infrastructure. The president need not have personally considered these issues; it’s enough that the AI learned the president’s preferences and applies them. And if you’re worried about preference drift, just keep the model accurate through a feedback loop, periodically sampling a few decisions and validating them with the president.
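To make this concrete, here is a minimal, purely hypothetical sketch in Python of what routing agency decisions through a presidential-preference model might look like. Everything in it is an illustrative assumption: the preference_score stub stands in for a model of the kind described above, and the routing threshold and sampled human-validation loop are invented for the example, not drawn from any real government system.

```python
# Hypothetical sketch only; no real government system or model is described here.
import random
from dataclasses import dataclass

@dataclass
class Memo:
    agency: str
    subject: str
    text: str

def preference_score(memo: Memo) -> float:
    """Stub for a model trained on a president's speeches, orders, and statements,
    returning an alignment score between 0 (conflicts) and 1 (fully aligned)."""
    return random.random()  # placeholder; a real system would query a fine-tuned model

def route_memo(memo: Memo, threshold: float = 0.6) -> str:
    """Alignment check applied to every memo before it advances in the workflow."""
    score = preference_score(memo)
    if score >= threshold:
        return f"ADVANCE ({score:.2f}): {memo.subject}"
    return f"RETURN TO {memo.agency} FOR REVISION ({score:.2f}): {memo.subject}"

def sample_for_validation(memos: list[Memo], rate: float = 0.01) -> list[Memo]:
    """The feedback loop: a small random sample of decisions set aside for human
    validation, correcting drift in the learned preferences."""
    return [m for m in memos if random.random() < rate]

if __name__ == "__main__":
    memo = Memo("EPA", "Wetlands permitting rule", "Draft rule text ...")
    print(route_memo(memo))
```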

And here’s why this matters: Once the president achieves AI-enabled control over the executive branch, all the other mechanisms become far more powerful. When emergency powers are invoked, the president can now deploy that authority systematically across every agency simultaneously through AI systems. Perfect enforcement becomes truly universal when presidential priorities are embedded algorithmically throughout government. Information dominance operates at massive scale when all executive branch communications are coordinated through shared AI frameworks. And inscrutable national security decisions multiply when every agency can act at machine speed under algorithmic control. Each mechanism reinforces the others.

Now, this might all sound like dystopian science fiction. But here’s what’s particularly disturbing: This AI-enabled control actually fulfills the Supreme Court’s vision of the unitary executive theory. It’s the natural synthesis of a 21st-century technology meeting this Court’s interpretation of an 18th-century document. Let me show you what I mean by taking the Court’s own reasoning seriously.

In Free Enterprise Fund v. PCAOB in 2010, the Court wrote: “The Constitution requires that a President chosen by the entire Nation oversee the execution of the laws.” And in Seila Law a decade later: “Only the President (along with the Vice President) is elected by the entire Nation.”

The argument goes like this: The president has unique democratic legitimacy as the only official elected by all voters. Therefore the president should control the executive branch. This is not actually a good argument, but let’s accept the Court’s logic for a moment.

If the president is the uniquely democratic voice that should oversee execution of all laws, then what’s wrong with an AI system that replicates presidential preferences across millions of decisions? Isn’t that the apogee of democratic accountability? Every bureaucratic decision aligned with the preferences of the only official chosen by the entire nation?

This is the unitary executive theory taken to its absurd, yet logical, conclusion.

Solutions

Let’s review. We’ve examined five mechanisms concentrating presidential power: emergency powers creating permanent crisis, perfect enforcement eliminating discretion, information dominance saturating discourse, the national security black box too opaque and fast for oversight, and AI making the unitary executive technologically feasible. Together they create an executive too fast, too complex, too comprehensive, and too powerful to constrain. 

So what do we do? Are there legal or institutional responses that could restrain the Unitary Artificial Executive before it fully materializes? 

Look, my job as an academic is to spot problems, not fix them. But it seems impolite to leave you all with a sense of impending doom. So—acknowledging that I’m more confident in the diagnosis than the prescription—let me offer some potential responses.

But before I do, let me be clear: Although I’ve spent the past half hour on doom and gloom, I’m the farthest thing from an AI skeptic. AI can massively improve government operations through faster service, better compliance, and reduced bias. At a time when Americans believe government is dysfunctional, AI offers real solutions. The question isn’t whether to use AI in government. We will, and we should. The question is how to capture these benefits while preventing unchecked concentration of power.

Legislative Solutions

Let’s start with legislative solutions. Congress could, for example, require congressional authorization before the executive branch deploys high-capability AI systems. It could limit emergency declarations to 30 or 60 days without renewal. And it could require explainable decisions with a human-in-the-loop for critical determinations.

But the challenges are obvious. Any president can veto restrictions on their own power, and in our polarized age it’s very hard to imagine a veto-proof majority. The president also controls how the laws are executed, so statutory requirements could be interpreted narrowly or ignored. Classification could shield AI systems from oversight. And “human-in-the-loop” requirements could become mere rubber-stamping.

Institutional and Structural Reforms

Beyond statutory text, we need institutional reforms. Start with oversight: Create an independent inspector general for AI with technical experts and clearance to access classified systems. But since oversight works only if overseers understand the technology, we also need to build congressional technical capacity by restoring the Office of Technology Assessment and expanding the Congressional Research Service’s AI expertise. Courts need similar resources—technical education programs and access to court-appointed AI experts. 

We could also work through the private sector, imposing explainability and auditing requirements on companies doing AI business with the federal government. And most ambitiously, we could try to embed legal compliance directly into AI architecture itself, designing “law-following AI” systems with constitutional constraints built directly into the models.

But, again, each of these proposals faces obstacles. Inspectors general risk capture by the agencies they oversee. Technical expertise doesn’t guarantee political will—Congress and courts may understand AI but still defer to the executive. National security classification could exempt government AI systems from explainability and auditing requirements. And for law-following AI, we still need to figure out how to train a model so that it understands what “following the law” actually means.

Constitutional Responses

Maybe the problem is more fundamental. Maybe we need to rethink the constitutional framework itself.

Constitutional amendments are unrealistic—the last was ratified in 1992, and partisan polarization makes the Article V process nearly impossible.

So more promising would be judicial reinterpretation of existing constitutional provisions. Courts could hold that Article II’s Vesting and Take Care Clauses don’t prohibit congressional regulation of executive branch AI. Courts could use the non-delegation doctrine to require that Congress set clear standards for AI deployment rather than giving the executive blank-check authority. And due process could require algorithmic transparency and meaningful human oversight as constitutional minimums.

But maybe the deeper problem is the unitary executive theory itself. That’s why I titled this lecture “The Unitary Artificial Executive”—as a warning that this constitutional theory becomes even more dangerous once AI makes it technologically feasible.

So here’s my provocation to my colleagues in the academy and the courts who advocate for a unitary executive: Your theory, combined with AI, leads to consequences you never anticipated and probably don’t want. The unitary executive theory values efficiency, decisiveness, and unity of command. It treats bureaucratic friction as dysfunction. But what if that friction is a feature, not a bug? What if bureaucratic slack, professional independence, expert dissent—the messy pluralism of the administrative state—are what stand between us and tyranny?

The ultimate constitutional solution may require reconsidering the unitary executive theory itself. Perfect presidential control isn’t a constitutional requirement but a recipe for autocracy once technology makes it achievable. We need to preserve spaces where the executive doesn’t speak with one mind—whether that mind is human or machine.

Conclusion

I’ve just offered some statutory approaches, institutional reforms, and constitutional reinterpretations. But let’s be honest about the obstacles: AI develops faster than law can regulate it. Most legislators and judges don’t understand AI well enough to constrain it. And both parties want presidential power when they control it. 

But lawyers have confronted existential rule-of-law challenges before. After Watergate, the Church Committee reforms led to real constraints on executive surveillance. After 9/11, when the executive claimed unchecked detention authority, lawyers fought back, forcing the Supreme Court to check executive overreach. When crisis and executive power have threatened constitutional governance, lawyers have been the constraint.

And, to the students in the audience, let me say: You will be too.

You’re entering the legal profession at a pivotal moment. The next decade will determine whether constitutional government survives the age of AI. Lawyers will be on the front lines of this fight. Some will work in the executive branch as the humans in the loop. Some will work in Congress—drafting statutes and demanding explanations. Some will litigate—bringing cases, performing discovery, and forcing judicial confrontation.

The Unitary Artificial Executive is not inevitable. It’s a choice we’re making incrementally, often without realizing it. The question is: Will we choose to constrain it while we still can? Or will we wake up one day to find we’ve built a constitutional autocracy—not through a coup, but through code?

This is a problem we’re still learning to see. But seeing it is the first step. And you all will determine what comes next.

Thank you. I look forward to your questions.

The limits of regulating AI safety through liability and insurance

Any opinions expressed in this post are those of the authors and do not reflect the views of the Institute for Law & AI.

At the end of September, California governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, S.B. 53, requiring large AI companies to report the risks associated with their technology and the safeguards they have put in place to protect against those risks. Unlike an earlier version of the bill, S.B. 1047, that Newsom vetoed a year earlier, this most recent version doesn’t focus on assigning liability to companies for harm caused by their AI systems. In fact, S.B. 53 explicitly limits financial penalties to $1 million for major incidents that kill more than 50 people or cause more than $1 billion in damage. 

This de-emphasizing of liability is deliberate—Democratic state Sen. Scott Wiener said in an interview with NBC News, “Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.” But that’s not necessarily a bad thing. In spite of a strong push to impose greater liability on AI companies for the harms their systems cause, there are good reasons to believe that stricter liability rules for AI won’t make many types of AI systems safer and more secure. In a new paper, we argue that liability is of limited value in safeguarding against many of the most significant AI risks. The reason is that liability insurers, who would ordinarily help manage and price such risks, are unlikely to be able to model them accurately or to induce their insureds to take meaningful steps to limit exposure.

Liability and Insurance

Greater liability for AI risks will almost certainly result in a much larger role for insurers in providing companies with coverage for that liability. This, in turn, would make insurers one of the key stakeholders determining what type of AI safeguards companies must put in place to qualify for insurance coverage. And there’s no guarantee that insurers will get that right. In fact, when insurers sought to play a comparable role in the cybersecurity domain, their interventions proved largely ineffective in reducing policyholders’ overall exposure to cyber risk. And many of the challenges that insurers encountered in pricing and affirmatively mitigating cyber risk are likely to be even more profound when it comes to modeling and pricing many of the most significant risks associated with AI systems.

AI systems present a wide range of risks, some of which insurers may indeed be well equipped to manage. For example, insurers may find it relatively straightforward to gather data on car crashes involving autonomous vehicles and to develop reasonably reliable predictive models for such events. But many of the risks associated with generative and agentic AI systems are far more complex, less observable, and more heterogeneous, making it difficult for insurers to collect data, design effective safeguards, or develop reliable predictive models. These risks run the gamut from chatbots that fail to alert anyone about a potentially suicidal user or that give customers incorrect advice and prices, to agents that place unwanted orders for supplies or services, develop malware that can be used to attack computer systems, or transfer funds incorrectly. For these types of risks—as well as more speculative potential catastrophic risks, such as AIs facilitating chemical or biological attacks—there is probably not going to be a large set of incidents that insurers can observe to build actuarial models, much less a clear consensus on how best to guard against them.

We know, from watching insurers struggle to mitigate cyber risks, that when there aren’t reliable data sources for a category of risk, or clear empirical evidence about how best to address it, it is very difficult for insurers to play a significant role in helping policyholders reduce their exposure. When it comes to cyber risk, insurers have faced several challenges that will likely apply as much—if not more so—to the risks posed by many of today’s rapidly proliferating AI systems.

Lack of data

The first challenge that stymied insurers’ efforts to model cyber risks was simply a lack of good data about how often incidents occur and how much they cost. Other than breaches of personal data, organizations have historically not been required to report most cybersecurity incidents, though that is changing with the upcoming implementation of the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Since they weren’t required to report incidents like ransomware, cyber-espionage, and denial-of-service attacks, most organizations didn’t, for fear of harming their reputation or inviting lawsuits and regulatory scrutiny. Because so many cybersecurity incidents were kept under wraps, insurers had a hard time, when they began offering cyber insurance coverage, figuring out how frequently these incidents occurred and what kinds of damage they typically caused. That’s why most cyber insurance policies were initially just data breach insurance—there was at least some data on breaches, which state laws required to be reported. 

Even as their coverage expanded to include other types of incidents besides data breaches, and insurers built up their own claims data sets, they still encountered challenges in predicting cybersecurity incidents because the threat landscape was not static. As attackers changed their tactics and adapted to new defenses, insurers found that the past trends were not always reliable indicators of what future cybersecurity incidents would look like. Most notably, in 2019 and 2020, insurers experienced a huge spike in ransomware claims that they had not anticipated, leading them to double and triple premiums for many policyholders in order to keep pace with the claims they faced.

Many AI incidents, like cybersecurity incidents, are not required by law to be reported and are therefore probably not made public. This is not uniformly true of all AI risks, of course. For instance, car crashes and other incidents with visible, physical consequences are very public and difficult—if not impossible—to keep secret. For these types of risks, especially if they occur at a high enough frequency to allow for the collection of robust data sets, insurers may be able to build reliable predictive models. However, many other types of risks associated with AI systems—including those linked to agentic and generative AI—are not easily observable by the outside world. And in some cases, it may be difficult, or even impossible, to know what role AI has played in an incident. If an attacker uses a generative AI tool to identify a software vulnerability and write malware to exploit that vulnerability, for instance, the victim and their insurer may never know what role AI played in the incident. This means that insurers will struggle to collect consistent or comprehensive historic data sets about these risks.

AI risks, too, may change over time, just as cyber risks do. Again, this is not equally true of all AI risks. Cybersecurity incidents almost always involve some degree of adversarial planning, with an attacker trying to compromise a computer system and adapting to safeguards and new technological developments; many AI incidents, by contrast, result from errors or limitations in the technology itself rather than from deliberate manipulation. But there are deliberate attacks on AI systems that insurers may struggle to predict using historical data, and even the incidents that are accidental rather than malicious may change and evolve considerably over time, given how quickly AI systems are changing and being applied to new areas. All of these challenges suggest that insurers will have a hard time modeling these types of AI risks and will therefore struggle to price them, just as they have with cyber risks.

Difficulty of Risk Assessments

Another major challenge insurers have encountered in the cyber insurance industry is assessing whether a company has done a good job of protecting itself against cyber threats. The industry standard for these assessments is the long questionnaire that companies fill out about their security posture, which often fails to capture key technical nuances about how safeguards like encryption and multi-factor authentication are implemented and configured. This makes it difficult for insurers to link premiums to their policyholders’ risk exposure, because they have no good way of measuring that exposure. So instead, most premiums are set according to a company’s revenue or industry sector. This means that companies often aren’t rewarded with lower premiums for investing in additional security safeguards and therefore have little incentive to make those investments.
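
To make the incentive problem concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical (the figures, rates, and function names are invented purely for illustration), but it captures the basic point: under revenue-based pricing, a firm that invests heavily in safeguards pays exactly the same premium as one that does not, whereas risk-based pricing would reward the investment.

```python
# A deliberately simplified sketch of the incentive problem described above.
# All figures, rates, and names are hypothetical and chosen only for illustration.

def revenue_based_premium(annual_revenue, industry_rate=0.001):
    """Premium keyed to revenue and sector, ignoring the firm's actual controls."""
    return annual_revenue * industry_rate

def risk_based_premium(incident_probability, expected_incident_cost, loading=1.3):
    """Premium keyed to measured risk: expected annual loss plus a loading factor."""
    return incident_probability * expected_incident_cost * loading

revenue = 50_000_000  # a hypothetical mid-sized firm

# Under revenue-based pricing, investing in safeguards changes nothing:
premium_weak_controls = revenue_based_premium(revenue)
premium_strong_controls = revenue_based_premium(revenue)  # identical premium
print(premium_weak_controls, premium_strong_controls)  # about 50,000 either way

# Under risk-based pricing, the same investment would be rewarded:
premium_weak_risk = risk_based_premium(incident_probability=0.30, expected_incident_cost=400_000)
premium_strong_risk = risk_based_premium(incident_probability=0.10, expected_incident_cost=400_000)
print(premium_weak_risk, premium_strong_risk)  # about 156,000 versus about 52,000
```

The gap between the last two figures is the incentive that revenue-based pricing leaves on the table.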

A similar, and arguably greater, challenge exists for assessing organizations’ exposure to AI risks. AI risks are so varied, and AI systems so complex, that identifying all of the relevant risks and auditing all of the technical components and code related to them requires technical expertise that most insurers are unlikely to have in-house. While insurers may try partnering with tech firms to perform these assessments, as they have in the past for cybersecurity assessments, they will also probably face pressure from brokers and clients to keep the assessment process lightweight and non-intrusive to avoid losing customers to their competitors. This has certainly been the case in the cyber insurance market, where many carriers continue to rely on questionnaires instead of other, more accurate assessment methods in order to avoid upsetting their clients.

But if insurers can’t assess their customers’ risk exposure, then they can’t help drive down that risk by rewarding, with lower premiums, the firms that have done the most to reduce it. To the contrary, this method of measuring and pricing risk signals to insureds that investments in risk mitigation are not worthwhile, since such efforts have little effect on premiums and primarily benefit insurers by reducing their exposure. This is yet another reason to be cautious about the potential for insurers to help make AI systems safer and more secure.

Uncertainty About Risk Mitigation Best Practices

Figuring out how to assess cyber risk exposure is not the only challenge insurers have encountered in underwriting cyber insurance. They have also struggled to determine what safeguards and security controls they should demand of their policyholders. While many insurers require common controls like encryption, firewalls, and multi-factor authentication, they often lack good empirical evidence about which of these measures are most effective. Even in their own claims data sets, insurers don’t always have reliable information about which safeguards were or were not in place when incidents occurred, because the very lawyers insurers supply to oversee incident investigations sometimes don’t want that information recorded or shared, for fear that it will be used in ensuing litigation.

The uncertainty about which best practices insurers should require from their customers is even greater when it comes to measures aimed at making many types of AI systems safer and more secure. There is little consensus about how best to do that beyond some broad ideas about audits, transparency, testing, and red teaming. If insurers don’t know which safeguards or security measures are most effective, then they may not require the right ones, further weakening their ability to reduce risk for their policyholders.

Catastrophic Risk

One final characteristic that AI and cyber risks share is the potential for very large-scale, interconnected incidents, or catastrophic risks, that generate more damage than insurers can cover. In cyber insurance, the potential for catastrophic losses stems in part from the fact that nearly all organizations rely on a fairly centralized set of software providers, cloud providers, and other computing infrastructure. An attack on the Windows operating system or on Amazon Web Services could therefore cause major damage to an enormous number of organizations across every country and industry sector, creating potentially huge losses for insurers, who would have no way to meaningfully diversify their risk pools. This has led cyber insurers and reinsurers to be relatively cautious about how much cyber risk they underwrite and to maintain high deductibles on these policies.
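
A minimal simulation can illustrate why this kind of concentration defeats diversification. The sketch below is hypothetical (the parameters are invented rather than drawn from any real book of business): it compares two portfolios with identical expected annual losses, one in which incidents strike policyholders independently and one in which every loss flows from a single shared dependency, such as a common cloud provider, failing for everyone at once.

```python
# A minimal Monte Carlo sketch of why concentration defeats diversification.
# All parameters are hypothetical and chosen only for illustration.
import random

N_POLICYHOLDERS = 1_000
P_INCIDENT = 0.02        # 2% chance of a loss per insured (or per shared failure) per year
LOSS_PER_INCIDENT = 1.0  # losses in arbitrary units
N_YEARS = 10_000         # simulated underwriting years

def simulate(correlated):
    """Return the simulated aggregate loss across the whole book for each year."""
    totals = []
    for _ in range(N_YEARS):
        if correlated:
            # A single shared dependency (e.g., one cloud provider) fails for everyone at once.
            total = N_POLICYHOLDERS * LOSS_PER_INCIDENT if random.random() < P_INCIDENT else 0.0
        else:
            # Losses strike policyholders independently of one another.
            total = sum(
                LOSS_PER_INCIDENT for _ in range(N_POLICYHOLDERS) if random.random() < P_INCIDENT
            )
        totals.append(total)
    return totals

independent = simulate(correlated=False)
correlated = simulate(correlated=True)

# Average annual losses are nearly identical (about 20 units in both cases)...
print(sum(independent) / N_YEARS, sum(correlated) / N_YEARS)
# ...but the worst year is not: roughly 35-40 units when losses are independent,
# versus 1,000 units (every policyholder at once) when they share a single dependency.
print(max(independent), max(correlated))
```

The two books look the same on average, but the correlated one occasionally produces a year whose losses dwarf the premiums collected, which is the basic reason insurers and reinsurers stay cautious about how much of this kind of exposure they take on.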

AI foundation models and infrastructure are similarly concentrated in a small number of companies, suggesting that an incident affecting a single widely used model could have similarly far-reaching consequences. Future AI systems may also pose a variety of catastrophic risks of their own, such as the possibility of these systems turning against humans or causing major physical accidents. Such catastrophic risks pose particular challenges for insurers and can make them more wary of offering large policies, which may in turn lead some companies to discount these risks entirely notwithstanding the prospect of liability.

Liability Limitation or Risk Reduction?

In general, the cyber insurance example suggests that when insurers confront risks for which there are no reliable data sets, for which firms’ risk levels cannot be assessed, for which the most effective safeguards are unknown, and which carry some potential for catastrophic consequences, they will end up helping their customers limit their liability rather than actually reduce their risk exposure. For instance, in the case of cyber insurance, this may mean involving lawyers early in the incident response process so that any relevant information is shielded against discovery in future litigation, without meaningfully changing the preventive security controls firms have in place to make incidents less likely to occur.

It is easy to imagine that imposing greater liability on AI companies could produce a similar outcome, where insurers intervene to help reduce that liability—perhaps by engaging legal counsel or mandating symbolic safeguards aimed at minimizing litigation or regulatory exposure—without meaningfully improving the safety or security of the underlying AI systems. That’s not to say insurers won’t play an important role in covering certain types of AI risks, or in helping pool risks for new types of AI systems. But it does suggest they will be able to do little to incentivize tech companies to put better safeguards in place for many of their AI systems.

That’s why California is wise to be focusing on reporting and transparency rather than liability in its new law. Requiring companies to report on risks and incidents can help build up data sets that enable insurers and governments to do a better job of measuring risks and the impact of different policy measures and safeguards. Of course, regulators face many of the same challenges that insurers do when it comes to deciding which safeguards to require for high-risk AI systems and how to mitigate catastrophic risks. But at the very least, regulators can help build up more robust data sets about the known risks associated with AI, the safeguards that companies are experimenting with, and how well they work to prevent different types of incidents. 

That type of regulation is badly needed for AI systems, and it would be a mistake to assume that insurers will take on the role of data collection and assessment themselves, when we have seen them try and fail to do that for more than two decades in the cyber insurance sector. The mandatory reporting for cybersecurity incidents that will go into effect next year under CIRCIA could have started twenty years ago if regulators hadn’t assumed that the private sector—led by insurers—would be capable of collecting that data on its own. And if it had started twenty years ago, we would probably know much more than we do today about the cyber threat landscape and the effectiveness of different security controls—information that would itself lead to a stronger cyber insurance industry. 

If regulators are wise, they will learn the lessons of cyber insurance and push for these types of regulations early on in the development of AI rather than focusing on imposing liability and leaving it in the hands of tech companies and insurers to figure out how best to shield themselves from that liability. Liability can be useful for dealing with some AI risks, but it would be a mistake not to recognize its limits when it comes to making emerging technologies safer and more secure.

Why give AI agents actual legal duties?

The core proposition of Law-Following AI (LFAI) is that AI agents should be designed to refuse to take illegal actions in the service of their principals. However, as Ketan and I explain in our writeup of LFAI for Lawfare, this raises a significant legal problem: 

[A]s the law stands, it is unclear how an AI could violate the law. The law, as it exists today, imposes duties on persons. AI agents are not persons, and we do not argue that they should be. So to say “AIs should follow the law” is, at present, a bit like saying “cows should follow the law” or “rocks should follow the law”: It’s an empty statement because there are at present no applicable laws for them to follow.

Let’s call this the Law-Grounding Problem for LFAI. LFAI requires defining AI actions as either legal or illegal. The problem arises because courts generally cannot reason about the legality of actions taken by an actor without some sort of legally recognized status, and AI systems currently lack any such status.[ref 1]

In the LFAI article, we propose solving the Law-Grounding Problem by making AI agents “legal actors”: entities on which the law actually imposes legal duties, even if they have no legal rights. This is explained and defended more fully in Part II of the article. Let’s call this the Actual Approach to the Law-Grounding Problem.[ref 2] Under the Actual Approach, claims like “that AI violated the Sherman Act” are just as true within our legal system as claims like “Jane Doe violated the Sherman Act.”

There is, however, another possible approach that we did not address fully in the article: saying that an AI agent has violated the law if it took an action that, if taken by a human, would have violated the law.[ref 3] Let’s call this the Fictive Approach to the Law-Grounding Problem. Under the Fictive Approach, claims like “that AI violated the Sherman Act” would not be true in the same way that statements like “Jane Doe violated the Sherman Act” are. Instead, statements like “that AI violated the Sherman Act” would be, at best, a convenient shorthand for statements like “that AI took an action that, if taken by a human, would have violated the Sherman Act.”

I will argue that the Actual Approach is preferable to the Fictive Approach in some cases.[ref 4] Before that, however, I will explain why someone might be attracted to the Fictive Approach in the first place.

Motivating the Fictive Approach

To say that something is fictive is not to say that it is useless; legal fictions are common and useful. The Fictive Approach to the Law-Grounding Problem has several attractive features.

The first is its ease of implementation: the Fictive Approach does not require any fundamental rethinking of legal ontology. We do not need to either grant AI agents legal personhood or create a new legal category for them.

The Fictive Approach might also track common language use: when people make statements like “Claude committed copyright infringement,” they probably mean it in the fictive sense. 

Finally, the Fictive Approach also mirrors how we think about similar problems, like immunity doctrines. The King of England may be immune from prosecution, but we can nevertheless speak intelligibly of his actions as lawful or unlawful by analyzing what the legal consequences would be if he were not immune.

Why prefer the Actual Approach?

Nevertheless, I think there are good reasons to prefer the Actual Approach over the Fictive Approach.

Analogizing to Humans Might Be Difficult

The strongest reason, in my opinion, is that AI agents may “think” and “act” very differently from humans. The Fictive Approach requires us to take a string of actions that an AI did and ask whether a human who performed the same actions would have acted illegally. The problem is that AI agents can take actions that could be very hard for humans to take, and so judges and jurors might struggle to analyze the legal consequences of a human doing the same thing. 

Today’s proto-agents are somewhat humanlike in that they receive instructions in natural language, use computer tools designed for humans, reason in natural language, and generally take actions serially at approximately human pace and scale. But we should not expect this paradigm to last. For example, AI agents might soon act at far greater than human speed and scale, operate as many coordinated copies of themselves at once, use machine-native interfaces rather than tools designed for humans, or reason in representations that humans cannot readily interpret.

And these are just some of the most foreseeable; over time, AI agents will likely become increasingly alien in their modes of reasoning and action. If so, then the Fictive Approach will become increasingly strained: judges and jurors will find themselves trying to determine whether actions that no human could have taken would have violated the law if performed by a human. At a minimum, this would require unusually good analogical reasoning skills; more likely, the coherence of the reasoning task would break down entirely.

Developing Tailored Laws and Doctrines for AIs

LFAI is motivated in large part by the belief that AI agents aligned to “a broad suite of existing laws”[ref 5] would be much safer than AI agents unbound by existing laws. But new laws specifically governing the behavior of AI agents will likely be necessary as those agents transform society.[ref 6] The Fictive Approach, however, cannot do any work for such AI-specific laws. Recall that the Fictive Approach says that an action by an AI agent violates a law just in case a human who took that action would have violated that law. But if the law in question applies only to AI agents, the Fictive Approach cannot be applied: no human could violate that law in the first place.

Relatedly, we may wish to develop new AI-specific legal doctrines, even for laws that apply to both humans and AIs. For example, we might wish to develop new doctrines for applying existing laws with a mental state component to AI agents.[ref 7] Alternatively, we may need to develop doctrines for determining when multiple instances of the same (or similar) AI models should be treated as identical actors. But the Fictive Approach is in tension with the development of AI-specific doctrines, since the whole point of the Fictive Approach is precisely to avoid reasoning about AI systems in their own right.

These conceptual tensions may be surmountable. But as a practical matter, a legal ontology that enables courts and legislatures to reason about AI systems in their own right seems more likely to produce nuanced doctrines and laws that are responsive to the actual nature of AI systems. The Fictive Approach, by contrast, encourages courts and legislatures to map AI actions onto human actions, an exercise that risks overlooking or minimizing the significant differences between humans and AI systems.

Grounding Respondeat Superior Liability

Some scholars propose using respondeat superior to impose liability on the human principals of AI agents for any “torts” committed by the latter.[ref 8] However, “[r]espondeat superior liability applies only when the employee has committed a tort. Accordingly, to apply respondeat superior to the principals of an AI agent, we need to be able to say that the behavior of the agent was tortious.”[ref 9] And we can say that an AI agent’s behavior was truly tortious only if the agent had a legal duty to breach. The Actual Approach allows for this; the Fictive Approach does not.

Of course, another option is simply to use the Fictive Approach for the application of respondeat superior liability as well. However, the Actual Approach seems preferable insofar as it doesn’t require this additional change. More generally, precisely because the Actual Approach integrates AI systems into the legal system more fully, it can be leveraged to parsimoniously solve problems in areas of law beyond LFAI.

In the LFAI article, we take no position as to whether AI agents should be given legal personhood: a bundle of duties and rights.[ref 10] However, there may be good reasons to grant AI agents some set of legal rights.[ref 11] 

Treating AI agents as legal actors under the Actual Approach creates optionality with respect to legal personhood: if the law recognizes an entity’s existence and imposes duties on it, it is easier for the law to subsequently grant that entity rights (and therefore personhood). But, we argue, the Actual Approach creates no obligation to do so:[ref 12] the law can coherently say that an entity has duties but no rights. Since it is unclear whether giving AIs rights would be wise, this optionality is valuable.

*      *      *

AI companies[ref 13] and policymakers[ref 14] are already tempted to impose legal duties on AI systems. To make serious policy progress toward this goal, they will need to decide whether to actually impose such duties or merely to use “lawbreaking AIs” as shorthand for a strained analogy to lawbreaking humans. Choosing the former path—the Actual Approach—is simpler and more adaptable, and therefore preferable.