The Unitary Artificial Executive

Editor’s note: The following are remarks delivered on October 23, 2025, at the University of Toledo Law School’s Stranahan National Issues Forum. Watch a recording of the address here. This transcript was originally posted at Lawfare.

Good afternoon. I’d like to thank Toledo Law School and the Stranahan National Issues Forum for the invitation to speak with you today. It’s an honor to be part of this series.

In 1973, the historian Arthur Schlesinger Jr., who served as a senior adviser in the Kennedy White House, gave us “The Imperial Presidency,” documenting the systematic expansion of unilateral presidential power that began with Washington and that Schlesinger was chronicling in the shadow of Nixon and Watergate. Each administration since then, Democrat and Republican alike, has argued for expansive executive authorities. Ford. Carter. Reagan. Bush 1. Clinton. Bush 2. Obama. The first Trump administration. Biden. And what we’re watching now in the second Trump administration is breathtaking.

This pattern of ever-expanding executive power has always been driven partly by technology. Indeed, throughout human history, transformative technologies have driven large-scale state evolution. Agriculture made populations large enough for taxation and conscription. Writing enabled bureaucratic empires across time and distance. The telegraph and the railroad annihilated space, centralizing control over vast territories. And computing made the modern administrative state logistically possible.

For American presidents specifically, this technological progression has been decisive. Lincoln was the first “wired president,” using the telegraph to centralize military command during the Civil War. FDR, JFK, and Reagan all used radio and then television to “go public” and speak directly to the masses. Trump is the undisputed master of social media.

I’ve come here today to tell you: We haven’t seen anything yet.

Previous expansions of presidential power were still constrained by human limitations. But artificial intelligence, or AI, eliminates those constraints—producing not incremental growth but structural transformation of the presidency. In this lecture I want to examine five mechanisms through which AI will concentrate unprecedented authority in the White House, turning Schlesinger’s “Imperial Presidency” into what I call the “Unitary Artificial Executive.” 

The first mechanism is the expansion of emergency powers. AI crises—things like autonomous weapons attacks or AI-enabled cybersecurity breaches—justify broad presidential action, exploiting the same judicial deference to executive authority in emergencies that courts have shown from the Civil War through 9/11 to the present. 

Second, AI enables perfect enforcement through automated surveillance and enforcement mechanisms, eliminating the need for the prosecutorial discretion that has always limited executive power. 

The third mechanism is information dominance. AI-powered messaging can saturate the public sphere through automated propaganda and micro-targeted persuasion, overwhelming the marketplace of ideas.

Fourth, AI in national security creates what scholars call the “double black box”—inscrutable AI nested inside national security secrecy. And when these inscrutable systems operate at machine speed, oversight becomes impossible. Cyber operations and autonomous weapons engagements complete in milliseconds—too fast and too opaque for meaningful oversight.

And fifth—and most dramatically—AI can finally realize the vision of the unitary executive. By that I mean something specific: not just a presidency with broad substantive authorities, but one that exerts complete, centralized control over executive branch decision-making. AI can serve as a cognitive proxy throughout the executive branch, injecting presidential preferences directly into algorithmic decisions, making unitary control technologically feasible for the first time.

These five mechanisms operate in two different ways. The first four expand the practical scope of presidential authority—emergency powers, enforcement, information control, and national security operations. They expand what presidents can do. The fifth mechanism is different. It’s about control. It determines how those powers are exercised. And the combination of these two creates an unprecedented concentration of power.

My argument is forward-looking, but it’s not speculative. From a legal perspective, these mechanisms build on existing presidential powers and fit comfortably within current constitutional doctrine. From a technological perspective, none of this requires artificial superintelligence or even artificial general intelligence. All of these capabilities are doable with today’s tools, and certainly achievable within the next few years.

Now, before we go further, let me tell you where I’m coming from. My academic career has focused on two research areas: first, the regulation of emerging technology, and, second, executive power. Up until now, these have been largely separate. This lecture brings those two tracks together.

But I also have some practical experience that’s relevant to this project. Before becoming a law professor, I was a junior policy attorney in the National Security Division at the Department of Justice. In other words, I was a card-carrying member of what the current administration calls the “deep state.”

One thing I learned is that the federal bureaucracy is very hard to govern. Decision-making is decentralized, information is siloed, civil servants have enormous autonomy—not so much because of their formal authority but because governing millions of employees is, from a practical perspective, impossible. That practical ungovernability is about to become governable.

Together with Nicholas Bednar, my colleague at the University of Minnesota Law School, I’ve been researching how this transformation might happen—and what it means for constitutional governance. This lecture is the first draft of the research we’ve been conducting.

So let’s jump in. To understand how the five mechanisms of expanded presidential power will operate—and why they’re not speculative—we need to start with AI’s actual capabilities. So what can AI actually do today, and what will it be able to do in the near future?

What Can AI Actually Do?

Again, I’m not talking about artificial general intelligence or superintelligence—those remain speculative, possibly decades away. I’m talking about today’s capabilities, including technology that is right now deployed in government systems. 

It’s helpful to think of AI as a pipeline with three stages: collection, analysis, and execution.

The first stage is data collection at scale. The best AI-powered facial recognition achieves over 99.9 percent accuracy, and Clearview AI—used by federal and state law enforcement—holds a database of over 60 billion images. The Department of Defense’s Project Maven—an AI-powered video analysis program—demonstrates the impact: 20 people using AI now replicate what once required 2,000. That’s a 100-fold increase in efficiency.

The second stage is data analysis. AI analyzes data at scales humans cannot match. FINRA—the financial industry self-regulator—processes 600 billion transactions daily using algorithmic surveillance, a volume that would require an army of analysts. FBI algorithms assess thousands of tip line calls a day for threat level and credibility. Systems like those from the technology company Palantir integrate databases across dozens of agencies in real time. All this analysis happens continuously, comprehensively, and faster than human oversight.

The third stage is automated execution, which operates at speeds and scales outstripping human capabilities. For example, DARPA’s AI-controlled F-16 has successfully engaged human pilots in mock dogfights, demonstrating autonomous combat capability. And the federal cybersecurity agency’s autonomous systems block more than a billion suspicious network connection requests across the federal government every year.

To summarize: AI can sense everything, process everything, and act on everything—all at digital speed and scale.

These are today’s capabilities—not speculation about future AI. But they’re also just the baseline. And they’re scaling up dramatically—driven by two forces. 

The first driver is the internal trajectory of AI itself. Training compute—the processing power used to build AI systems—has increased four to five times per year since 2010. Epoch AI, a research organization tracking AI progress, projects that frontier AI models will use thousands of times more compute than OpenAI’s GPT-4 by 2030, with training clusters costing over $100 billion. 

What will this enable? By 2030 at the latest, AI should be capable of building large-scale software projects, producing advanced mathematical proofs, and engaging in multi-week autonomous research. In government, that means AI systems that don’t just analyze but execute complete, large-scale tasks from start to finish. 

The second driver of AI advancement is geopolitical competition. China’s 2017 AI Development Plan targets global leadership by 2030, backed by massive state investment. They’ve deployed generative AI news anchors and built the nationwide Skynet video surveillance system—and yes, they actually called it that. China’s technical capabilities are advancing rapidly—the DeepSeek breakthrough earlier this year demonstrated that Chinese researchers can match or exceed Western AI performance, often at a fraction of the cost.

In today’s polarized Washington, there’s only one thing Democrats and Republicans agree on: China is a threat that must be confronted. That consensus is driving much of AI policy. So it’s unsurprising that the administration’s recent AI Action Plan frames the U.S. response as seeking “unquestioned … technological dominance.” Federal generative AI use cases have increased ninefold in one year, and the Defense Department awarded $800 million in AI contracts this past July. The department has also established detailed procedures for developing autonomous lethal weapons, reflecting the Pentagon’s assumption that such systems are the future. 

It’s easy to see how this competitive dynamic could be used to justify concentrating AI in the executive branch. “We can’t afford congressional delays. Transparency would give adversaries advantages. Traditional deliberation is incompatible with the speed of AI development.” The AI arms race could easily become a permanent emergency justifying rapid deployment.

Five Mechanisms Through Which AI Concentrates Presidential Power

So those are the drivers of AI progress—rapidly advancing capabilities and geopolitical pressure. Now let’s examine the five distinct mechanisms through which these forces will actually concentrate presidential power.

Mechanism 1: Emergency Powers

Presidential emergency powers rest on two sources with deep historical roots. The first is inherent presidential authority under Article II. During the Civil War, for example, Lincoln blockaded Southern ports, expanded the army, and spent unappropriated funds, in each case claiming inherent constitutional authority as commander in chief.

The second source of emergency powers is explicit congressional delegation. When FDR closed every bank in March 1933, he did so under the Trading with the Enemy Act. After 9/11, Congress passed an Authorization for Use of Military Force—still in effect two decades later and the source of ongoing military operations across multiple continents. Today the presidency operates under more than 40 continuing national emergencies. For example, Trump has invoked the International Emergency Economic Powers Act (IEEPA) to impose many of his ongoing tariffs, declaring trade imbalances a national security emergency.

With both sources, courts usually defer. From the Prize Cases upholding Lincoln’s Southern blockade through Korematsu affirming Japanese internment to Trump v. Hawaii permitting the first Trump administration’s Muslim travel bans, the Supreme Court has generally granted presidents extraordinary latitude during emergencies. There are of course exceptions—Youngstown and the post-9/11 cases like Hamdi and Boumediene being the most famous—but the pattern is clear: When the president invokes national security or emergency powers, judicial review is limited. 

So what has constrained emergency powers? The emergencies themselves. Throughout history, emergencies were rare and time limited—the Civil War, the Great Depression, Pearl Harbor, 9/11. Wars ended, and crises receded. Our separation-of-powers framework has worked because emergencies have generally been the temporary exception, not the norm.

AI breaks this assumption.

AI empowers adversaries asymmetrically—giving offensive capabilities that outpace defensive responses. Foreign actors can use AI to identify vulnerabilities, automate attacks, and target critical infrastructure at previously impossible scale and speed. The same AI capabilities that strengthen the president also strengthen our adversaries, creating a perpetual heightened threat that justifies permanent emergency powers. 

Here’s what an AI-enabled emergency might look like. A foreign adversary uses AI to target U.S. critical infrastructure—things like the power grid, financial systems, or water treatment. Within hours, the president invokes IEEPA, the Defense Production Act, and inherent Article II authority. AI surveillance monitors all network traffic. Algorithmic screening begins for financial transactions. And compliance monitoring extends across critical infrastructure.

The immediate crisis might pass in 48 hours, but the emergency infrastructure never gets dismantled. Surveillance remains operational, and each emergency builds infrastructure for the next one.

Why does our constitutional system permit this? First, speed: Presidential action completes before Congress can react. Second, secrecy: Classification shields details from Congress, courts, and the public. Third, judicial deference: Courts defer almost automatically when “national security” and “emergency” appear in the same sentence. And, as if to add insult to injury, the president’s own AI systems might soon be the ones assessing threats and determining what counts as an emergency.

Mechanism 2: Perfect Enforcement

Emergency powers are—theoretically, at least—episodic. But enforcement of the laws happens continuously, every day, in every interaction between citizen and state. That’s where the second mechanism—perfect enforcement—operates.

Pre-AI governance depends on enforcement discretion. We have thousands of criminal statutes and millions of regulations, and so, inevitably, prosecutors have to choose cases, agencies have to prioritize violations, and police have to exercise judgment. The Supreme Court has recognized this necessity: In cases like Heckler v. Chaney, Batchelder, and Wayte, the Court held that non-enforcement decisions are presumptively unreviewable because agencies must allocate scarce resources. This discretion prevents tyranny by allowing mercy, context, and human judgment.

AI eliminates that necessity. When every violation can be detected and every rule can be enforced, enforcement discretion becomes a choice rather than a practical constraint. The question becomes: What happens when the Take Care Clause meets perfect enforcement? Does the Take Care Clause allow the president to enforce the laws to the hilt? Might it require him to? 

As an example, consider what perfect immigration enforcement might look like. (And you can imagine this across every enforcement domain: tax compliance, environmental violations, workplace safety—even traffic laws.) Already facial recognition databases cover tens of millions of Americans, real-time camera networks monitor movement, financial systems track transactions, social media analysis identifies patterns, and automated risk assessment scores individuals. Again, China is leading the way—its “social credit” system demonstrates what’s possible when these technologies are integrated.

Now imagine the president directs DHS to do the same: build a single AI system that identifies every visa overstay and automatically generates enforcement actions. There are no more “enforcement priorities”—the algorithm flags everyone, and ICE officers blindly execute its millions of directives with perfect consistency.

Why does the Constitution allow this? The Take Care Clause traditionally required discretion because resource limits made total enforcement impossible. But AI changes this. Now the Take Care Clause can be read as consistent with eliminating discretion—the president isn’t violating his duty by enforcing everything, he’s just being thorough.

More aggressively: The president might argue that perfect enforcement is not just permitted but required. Congress wrote these laws, and the president is merely faithfully executing what Congress commanded now that technology makes it possible. If there’s no resource constraint, there’s no justification for discretion.

What about Equal Protection or Due Process? The Constitution might actually favor algorithmic enforcement. Equal Protection could be satisfied by perfect consistency if algorithmic enforcement treats identical violations identically, eliminating the arbitrary disparities that plague human judgment. And Due Process might be satisfied if AI proves more accurate than humans, which it may well be. Power once dispersed among millions of fallible officials becomes concentrated in algorithmic policy that could, compared to the human alternative, be more consistent, more accurate, and more just.

There’s one final effect that perfect enforcement produces: It ratchets up punishment beyond congressional intent. Congress wrote laws assuming enforcement discretion would moderate impact. They set harsh penalties knowing prosecutors would focus on serious cases and agencies would prioritize egregious violations, while minor infractions would largely be ignored.

But AI removes that backdrop. When every violation is enforced—even trivial ones Congress never expected would be prosecuted—the net effect is dramatically higher punitiveness. Congress calibrated the system assuming discretion would filter out minor cases. AI enforces everything, producing an aggregate severity Congress never intended.

Mechanism 3: Information Dominance

The first two mechanisms concentrating presidential power—emergency powers and perfect enforcement—expand what the president can do. The third mechanism is about controlling what citizens know. AI enables the president to saturate public discourse at unprecedented scale. And if the executive controls what citizens see, hear, and believe, how can Congress, courts, or the public effectively resist?

The Supreme Court has held that the First Amendment doesn’t restrict the government’s own speech. This government speech doctrine means that the government can select monuments, choose license plate messages, and communicate preferred policies—all with no constitutional limit on volume, persistence, or sophistication.

Until now, practical constraints limited the scale of this speech—more messages required more people, more time, and more resources. AI eliminates these constraints, enabling content generation at near-zero marginal cost, operating across all platforms simultaneously, and delivering personalized messages to every citizen. The government speech doctrine never contemplated AI-powered saturation, and there is no limiting principle in existing case law.

Again, look to China for the future—it’s already using AI to saturate public discourse. In August, leaked documents revealed that GoLaxy, a Chinese AI company, built a “Smart Propaganda System”—AI that monitors millions of posts daily and generates personalized counter-messaging in real time, producing content that “feels authentic … and avoids detection.” The Chinese government has used it to suppress Hong Kong protest movements and influence Taiwanese elections. 

Now imagine an American president deploying these capabilities domestically.

It’s 2027. A major presidential scandal breaks—Congress investigates, courts rule executive actions unconstitutional, and in response the Presidential AI Response System activates. It floods social media platforms, news aggregators, and recommendation algorithms with government-generated content.

You’re a suburban Ohio parent worried about safety, and your phone shows AI-generated content about how the congressional investigation threatens law enforcement funding, accompanied by fake “local crime statistics.” Your neighbor, a student at the excellent local law school, is concerned about civil liberties—she sees completely different content about “partisan witch hunts” undermining due process. Same scandal, different narratives—the public can’t even agree on basic facts.

The AI system operates in three layers. First, it generates personalized messaging, detecting which demographics are persuadable and which narratives are gaining traction, A/B testing and adjusting counter-messages in real time. Second, it manipulates platform algorithms, persuading social media companies to down-rank “disinformation”—which means congressional hearings never surface in your feed and news about court decisions gets buried. Third, it saturates public discourse through sheer volume, generating millions of messages across all platforms that drown out opposition not through censorship but through scale that private speakers can’t match.

And all the while the First Amendment offers no constraint because the government speech doctrine allows the government to say whatever it wants, as much as it wants.

Information dominance makes resistance to the other mechanisms impossible. How do you organize opposition to emergency powers if you never hear about them? How do you resist perfect enforcement if you’ve been convinced it’s necessary? And how do you check national security decisions if you’re convinced of the threat—and if you can’t understand how the AI made the decision in the first place?

Which brings us to the fourth mechanism.

Mechanism 4: The National Security Black Box

National security is where presidential power reaches its apex. The Constitution grants the president enormous authority as commander in chief, along with control over intelligence and classification, and courts have historically granted extreme deference: They defer to military decisions, and the “political question” doctrine bars review of many national security judgments.

Congress retains constitutional checks—the power to declare war, appropriate funds, demand intelligence briefings, and conduct investigations. But AI creates what University of Virginia law professor Ashley Deeks calls the “double black box”—a problem that renders these checks ineffective.

The first—inner—box is AI’s opacity. AI systems are inscrutable black boxes that even their designers can’t fully explain. Congressional staffers lack the technical expertise to evaluate them, and courts have no framework for reviewing algorithmic military judgments. No one—not even the executive branch officials nominally in charge—can explain why the AI reached a particular decision.

The second—outer—box is traditional national security secrecy. Classification shields operational details and the state secrets privilege blocks judicial review. The executive controls intelligence access, meaning Congress depends on the executive for the very information needed for oversight.

These layers combine: Congress can’t oversee what it can’t see or understand. Courts can’t review what they can’t access or evaluate. The public can’t hold anyone accountable for what’s invisible and incomprehensible.

And then speed makes things worse. AI operations complete in minutes, if not seconds, creating a fait accompli before oversight can engage. By the time Congress learns what happened through classified briefings, facts on the ground have changed. Even if Congress could overcome both layers of inscrutability, it would be too late to restrain executive action.

Consider what this could look like in practice. It’s 3:47 a.m., and a foreign military AI probes U.S. critical infrastructure: This time it’s the industrial control systems that run the Eastern Seaboard’s electrical grid.

Just 30 milliseconds later, U.S. Cyber Command’s AI detects the intrusion and assesses a 99.7 percent probability that this is reconnaissance for a future attack. 

Less than a second later, the AI decision tree executes. It evaluates options—monitoring is insufficient, counter-probing is inadequate, blocking would only be temporary—and selects a counterattack targeting foreign military command and control. The system accesses authorization from pre-delegated protocols and deploys malware.

Three minutes after the initial probe, the U.S. AI has disrupted foreign military networks, taking air defense offline, compromising communications, and destabilizing the attackers’ own power grids.

At 3:51 a.m., a Cyber Command officer is notified of the completed operation. At 7:30 a.m., the president receives a briefing over breakfast on a serious military operation that she—supposedly the commander in chief—had no role in. But she’s still better off than congressional leadership, which only learns about the operation later that day when CNN breaks the story.

This won’t be an isolated incident. Each AI operation completes before oversight is possible, establishing precedent for the next. By the time Congress or courts respond, strategic facts have changed. The constitutional separation of war powers requires transparency and time—both of which AI operations eliminate.

Mechanism 5: Realizing the Unitary Executive

The first four mechanisms—emergency powers, perfect enforcement, information dominance, and inscrutable national security decisions—expand the scope of presidential power. Each extends presidential reach.

But the fifth mechanism is different. It’s not about doing more but about controlling how it gets done. After all, how is a single president supposed to control a bureaucracy of nearly 3 million employees making untold decisions every day? The unitary executive theory has been debated for over two centuries and has recently become the dominant constitutional position at the Supreme Court. But in all this time, unitary control has been, practically speaking, impossible. AI removes that practical constraint.

Article II, Section 1, states that “The executive Power shall be vested in a President.” THE executive power. A President. Singular. This is the textual foundation for the unitary executive theory: the idea that all executive authority flows through one person and that this one person must therefore control all executive authority. 

The main battleground for this theory has been unilateral presidential firing authority. If the president can fire subordinates at will, control follows. The First Congress debated this in 1789, when James Madison proposed that department secretaries be removable by the president alone. Congress’s decision at the time implied that the president had such a power, but we’ve been fighting about presidential control ever since. 

The Supreme Court has zigzagged on this issue, from Myers in 1926 affirming presidential removal power, to Humphrey’s Executor less than a decade later carving out huge exceptions for independent agencies, to Morrison v. Olson in 1988, where Justice Antonin Scalia’s lone dissent defended the unitary executive. But by Seila Law v. CFPB in 2020, Scalia’s dissent had become the majority view. Unitary executive theory is now ascendant. (And we’ll see how far the Court pushes it when it decides on Federal Reserve Board independence later this term.)

But in a practical sense, the constitutional questions have always been second-order. Even if the president had constitutional authority for unitary control, practical reality made it impossible. Harry Truman famously quipped about Eisenhower upon his election in 1952: “He’ll sit here [in the Oval Office] and he’ll say, ‘Do this! Do that!’ And nothing will happen. Poor Ike—it won’t be a bit like the Army. He’ll find it very frustrating.”

One person just can’t process information from millions of employees, supervise 400 agencies, and know what subordinates are doing across the vast federal bureaucracy. Career civil servants can slow-roll directives, misinterpret guidance, quietly resist—or simply just not know what the president wants them to do. The real constraint on presidential power has always been practical, not constitutional.

But AI removes those constraints. It transforms the unitary executive theory from a constitutional dream into an operational reality.

Here’s a concrete example—real, not hypothetical. In January, the Trump administration sent a “Fork in the Road” email to federal employees: return to office, accept downsizing, pledge loyalty, or take deferred resignation. DOGE—the Department of Government Efficiency—deployed Meta’s Llama 2 AI model to review and classify responses. In a subsequent email, DOGE asked employees to describe weekly accomplishments and used AI to assess whether work was mission critical. If AI can determine mission-criticality, it can assess tone, sentiment, loyalty, or dissent.

DOGE analyzed responses to one email, but the same technology works for every email, every text message, every memo, and every Slack conversation. Federal email systems are centrally managed, workplace platforms are deployed government-wide, and because Llama is open source, Meta can’t refuse to have its model used in this way. And because federal employees have limited privacy expectations in their work communications, the Fourth Amendment permits most of this workplace monitoring.

Monitoring is just the beginning. The real transformation comes from training AI on presidential preferences. The training data is everywhere: campaign speeches, policy statements, social media, executive orders, signing statements, tweets, all continuously updated. The result is an algorithmic representation of the president’s priorities. Call it TrumpGPT.

Deploy that model throughout the executive branch and you can route every memo through the AI for alignment checks, screen every agenda for presidential priorities, and evaluate every recommendation against predicted preferences. The president’s desires become embedded in the workflow itself.

But it goes further. AI can generate presidential opinions on issues the president never considered. Traditionally, even the wonkiest of presidents have had enough cognitive bandwidth for only 20, maybe 30 marquee issues—immigration, defense, the economy. Everything else gets delegated to bureaucratic middle management.

But AI changes this. The president can now have an “opinion” on everything. EPA rule on wetlands permits? The AI cross-references it with energy policy. USDA guidance on organic labeling? Check against agricultural priorities. FCC decision on rural broadband? Align with public statements on infrastructure. The president need not have personally considered these issues; it’s enough that the AI has learned the president’s preferences and applies them. And if you’re worried about preference drift, just keep the model accurate through a feedback loop, periodically sampling a few decisions and validating them with the president.
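To make the workflow concrete, here is a minimal, purely illustrative sketch of such an alignment check and feedback loop. Everything in it is an assumption for exposition (the PreferenceModel class, the keyword scorer standing in for a model trained on presidential statements, the threshold, and the sampling rate), not a description of any deployed system.

```python
# Hypothetical sketch of the alignment-check workflow described above.
# A trivial keyword scorer stands in for a model trained on speeches,
# orders, and social media posts; all names and values are illustrative.

import random
from dataclasses import dataclass


@dataclass
class PreferenceModel:
    priority_terms: dict[str, float]  # term -> weight learned from public statements

    def score(self, text: str) -> float:
        words = set(text.lower().split())
        return sum(w for term, w in self.priority_terms.items() if term in words)


def alignment_check(memo: str, model: PreferenceModel, threshold: float = 1.0) -> str:
    """Route a document through the preference model before it advances."""
    return "forward" if model.score(memo) >= threshold else "flag_for_review"


def sample_for_validation(decisions: list[str], rate: float = 0.01) -> list[str]:
    """Feedback loop: periodically sample decisions for human validation
    to correct preference drift."""
    return [d for d in decisions if random.random() < rate]


if __name__ == "__main__":
    model = PreferenceModel({"energy": 1.0, "border": 1.0, "infrastructure": 0.5})
    memos = [
        "EPA draft rule on wetlands permits near energy projects",
        "USDA guidance on organic labeling for imported produce",
    ]
    for m in memos:
        print(alignment_check(m, model), "-", m)
    print("sampled for validation:", sample_for_validation(memos, rate=0.5))
```

In a real deployment the scorer would presumably be a large language model trained on the president’s public record rather than a keyword list, but the control loop of scoring, routing, sampling, and revalidating would work the same way.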

And here’s why this matters: Once the president achieves AI-enabled control over the executive branch, all the other mechanisms become far more powerful. When emergency powers are invoked, the president can now deploy that authority systematically across every agency simultaneously through AI systems. Perfect enforcement becomes truly universal when presidential priorities are embedded algorithmically throughout government. Information dominance operates at massive scale when all executive branch communications are coordinated through shared AI frameworks. And inscrutable national security decisions multiply when every agency can act at machine speed under algorithmic control. Each mechanism reinforces the others.

Now, this might all sound like dystopian science fiction. But here’s what’s particularly disturbing: This AI-enabled control actually fulfills the Supreme Court’s vision of the unitary executive theory. It’s the natural synthesis of a 21st-century technology meeting this Court’s interpretation of an 18th-century document. Let me show you what I mean by taking the Court’s own reasoning seriously.

In Free Enterprise Fund v. PCAOB in 2010, the Court wrote: “The Constitution requires that a President chosen by the entire Nation oversee the execution of the laws.” And in Seila Law a decade later: “Only the President (along with the Vice President) is elected by the entire Nation.”

The argument goes like this: The president has unique democratic legitimacy as the only official elected by all voters. Therefore the president should control the executive branch. This is not actually a good argument, but let’s accept the Court’s logic for a moment.

If the president is the uniquely democratic voice that should oversee execution of all laws, then what’s wrong with an AI system that replicates presidential preferences across millions of decisions? Isn’t that the apogee of democratic accountability? Every bureaucratic decision aligned with the preferences of the only official chosen by the entire nation?

This is the unitary executive theory taken to its absurd, yet logical, conclusion.

Solutions

Let’s review. We’ve examined five mechanisms concentrating presidential power: emergency powers creating permanent crisis, perfect enforcement eliminating discretion, information dominance saturating discourse, the national security black box too opaque and fast for oversight, and AI making the unitary executive technologically feasible. Together they create an executive too fast, too complex, too comprehensive, and too powerful to constrain. 

So what do we do? Are there legal or institutional responses that could restrain the Unitary Artificial Executive before it fully materializes? 

Look, my job as an academic is to spot problems, not fix them. But it seems impolite to leave you all with a sense of impending doom. So—acknowledging that I’m more confident in the diagnosis than the prescription—let me offer some potential responses.

But before I do, let me be clear: Although I’ve spent the past half hour on doom and gloom, I’m the farthest thing from an AI skeptic. AI can massively improve government operations through faster service, better compliance, and reduced bias. At a time when Americans believe government is dysfunctional, AI offers real solutions. The question isn’t whether to use AI in government. We will, and we should. The question is how to capture these benefits while preventing unchecked concentration of power.

Legislative Solutions

Let’s start with legislative solutions. Congress could, for example, require congressional authorization before the executive branch deploys high-capability AI systems. It could limit emergency declarations to 30 or 60 days without renewal. And it could require explainable decisions with a human-in-the-loop for critical determinations.

But the challenges are obvious. Any president can veto restrictions on their own power, and in our polarized age it’s very hard to imagine a veto-proof majority. The president also controls how the laws are executed, so statutory requirements could be interpreted narrowly or ignored. Classification could shield AI systems from oversight. And “human-in-the-loop” requirements could become mere rubber-stamping.

Institutional and Structural Reforms

Beyond statutory text, we need institutional reforms. Start with oversight: Create an independent inspector general for AI with technical experts and clearance to access classified systems. But since oversight works only if overseers understand the technology, we also need to build congressional technical capacity by restoring the Office of Technology Assessment and expanding the Congressional Research Service’s AI expertise. Courts need similar resources—technical education programs and access to court-appointed AI experts. 

We could also work through the private sector, imposing explainability and auditing requirements on companies doing AI business with the federal government. And most ambitiously, we could try to embed legal compliance directly into AI architecture itself, designing “law-following AI” systems with constitutional constraints built directly into the models.

But, again, each of these proposals faces obstacles. Inspectors general risk capture by the agencies they oversee. Technical expertise doesn’t guarantee political will—Congress and courts may understand AI but still defer to the executive. National security classification could exempt government AI systems from explainability and auditing requirements. And for law-following AI, we still need to figure out how to train a model so that it understands what “following the law” actually means.

Constitutional Responses

Maybe the problem is more fundamental. Maybe we need to rethink the constitutional framework itself.

Constitutional amendments are unrealistic—the last one was ratified in 1992, and partisan polarization makes the Article V process nearly impossible.

So more promising would be judicial reinterpretation of existing constitutional provisions. Courts could hold that Article II’s Vesting and Take Care Clauses don’t prohibit congressional regulation of executive branch AI. Courts could use the non-delegation doctrine to require that Congress set clear standards for AI deployment rather than giving the executive blank-check authority. And due process could require algorithmic transparency and meaningful human oversight as constitutional minimums.

But maybe the deeper problem is the unitary executive theory itself. That’s why I titled this lecture “The Unitary Artificial Executive”—as a warning that this constitutional theory becomes even more dangerous once AI makes it technologically feasible.

So here’s my provocation to my colleagues in the academy and the courts who advocate for a unitary executive: Your theory, combined with AI, leads to consequences you never anticipated and probably don’t want. The unitary executive theory values efficiency, decisiveness, and unity of command. It treats bureaucratic friction as dysfunction. But what if that friction is a feature, not a bug? What if bureaucratic slack, professional independence, expert dissent—the messy pluralism of the administrative state—are what stands between us and tyranny?

The ultimate constitutional solution may require reconsidering the unitary executive theory itself. Perfect presidential control isn’t a constitutional requirement but a recipe for autocracy once technology makes it achievable. We need to preserve spaces where the executive doesn’t speak with one mind—whether that mind is human or machine.

Conclusion

I’ve just offered some statutory approaches, institutional reforms, and constitutional reinterpretations. But let’s be honest about the obstacles: AI develops faster than law can regulate it. Most legislators and judges don’t understand AI well enough to constrain it. And both parties want presidential power when they control it. 

But lawyers have confronted existential rule-of-law challenges before. After Watergate, the Church Committee reforms led to real constraints on executive surveillance. After 9/11, when the executive claimed unchecked detention authority, lawyers fought back, forcing the Supreme Court to check executive overreach. When crisis and executive power have threatened constitutional governance, lawyers have been the constraint.

And, to the students in the audience, let me say: You will be too.

You’re entering the legal profession at a pivotal moment. The next decade will determine whether constitutional government survives the age of AI. Lawyers will be on the front lines of this fight. Some will work in the executive branch as the humans in the loop. Some will work in Congress—drafting statutes and demanding explanations. Some will litigate—bringing cases, performing discovery, and forcing judicial confrontation.

The Unitary Artificial Executive is not inevitable. It’s a choice we’re making incrementally, often without realizing it. The question is: Will we choose to constrain it while we still can? Or will we wake up one day to find we’ve built a constitutional autocracy—not through a coup, but through code?

This is a problem we’re still learning to see. But seeing it is the first step. And you all will determine what comes next.

Thank you. I look forward to your questions.

The limits of regulating AI safety through liability and insurance

Any opinions expressed in this post are those of the authors and do not reflect the views of the Institute for Law & AI.

At the end of September, California governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, S.B. 53, requiring large AI companies to report the risks associated with their technology and the safeguards they have put in place to protect against those risks. Unlike an earlier bill, S.B. 1047, which Newsom vetoed a year earlier, the new law doesn’t focus on assigning liability to companies for harm caused by their AI systems. In fact, S.B. 53 explicitly limits financial penalties to $1 million for major incidents that kill more than 50 people or cause more than $1 billion in damage.

This de-emphasizing of liability is deliberate—Democratic state Sen. Scott Wiener said in an interview with NBC News, “Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.” But that’s not necessarily a bad thing. In spite of a strong push to impose greater liability on AI companies for the harms their systems cause, there are good reasons to believe that stricter liability rules for AI won’t make many types of AI systems safer and more secure. In a new paper, we argue that liability is of limited value in safeguarding against many of the most significant AI risks. The reason is that liability insurers, who would ordinarily help manage and price such risks, are unlikely to be able to model them accurately or to induce their insureds to take meaningful steps to limit exposure.

Liability and Insurance

Greater liability for AI risks will almost certainly result in a much larger role for insurers in providing companies with coverage for that liability. This, in turn, would make insurers one of the key stakeholders determining what type of AI safeguards companies must put in place to qualify for insurance coverage. And there’s no guarantee that insurers will get that right. In fact, when insurers sought to play a comparable role in the cybersecurity domain, their interventions proved largely ineffective in reducing policyholders’ overall exposure to cyber risk. And many of the challenges that insurers encountered in pricing and affirmatively mitigating cyber risk are likely to be even more profound when it comes to modeling and pricing many of the most significant risks associated with AI systems.

AI systems present a wide range of risks, some of which insurers may indeed be well equipped to manage. For example, insurers may find it relatively straightforward to gather data on car crashes involving autonomous vehicles and to develop reasonably reliable predictive models for such events. But many of the risks associated with generative and agentic AI systems are far more complex, less observable, and more heterogeneous, making it difficult for insurers to collect data, design effective safeguards, or develop reliable predictive models. These risks run the gamut from chatbots that fail to alert anyone about a potentially suicidal user or give customers incorrect advice and prices, to agents that place unwanted orders for supplies or services, develop malware that can be used to attack computer systems, or transfer funds incorrectly. For these types of risks—as well as more speculative potential catastrophic risks, such as AIs facilitating chemical or biological attacks—there is probably not going to be a large set of incidents that insurers can observe to build actuarial models, much less a clear consensus on how best to guard against them.

We know, from watching insurers struggle with how best to mitigate cyber risks, that when there aren’t reliable data sources for risks, or clear empirical evidence about how best to address those risks, it can be very difficult for insurers to play a significant role in helping policyholders do a better job of reducing their risk. When it comes to cyber risk, there have been several challenges that will likely apply as much—if not more—to the risks posed by many of today’s rapidly proliferating AI systems.

Lack of Data

The first challenge that stymied insurers’ efforts to model cyber risks was simply a lack of good data about how often incidents occur and how much they cost. Other than breaches of personal data, organizations have historically not been required to report most cybersecurity incidents, though that is changing with the upcoming implementation of the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Since they weren’t required to report incidents like ransomware, cyber-espionage, and denial-of-service attacks, most organizations didn’t, for fear of harming their reputation or inviting lawsuits and regulatory scrutiny. But because so many cybersecurity incidents were kept under wraps, insurers had a hard time, when they began offering cyber insurance coverage, figuring out how frequently these incidents occurred and what kinds of damage they typically caused. That’s why most cyber insurance policies were initially just data breach insurance—because there was at least some data on breaches, which were required to be reported under state laws.

Even as their coverage expanded to include other types of incidents besides data breaches, and insurers built up their own claims data sets, they still encountered challenges in predicting cybersecurity incidents because the threat landscape was not static. As attackers changed their tactics and adapted to new defenses, insurers found that the past trends were not always reliable indicators of what future cybersecurity incidents would look like. Most notably, in 2019 and 2020, insurers experienced a huge spike in ransomware claims that they had not anticipated, leading them to double and triple premiums for many policyholders in order to keep pace with the claims they faced.

Many AI incidents, like cybersecurity incidents, are not required by law to be reported and are therefore probably not made public. This is not uniformly true of all AI risks, of course. For instance, car crashes and other incidents with visible, physical consequences are very public and difficult—if not impossible—to keep secret. For these types of risks, especially if they occur at a high enough frequency to allow for the collection of robust data sets, insurers may be able to build reliable predictive models. However, many other types of risks associated with AI systems—including those linked to agentic and generative AI—are not easily observable by the outside world. And in some cases, it may be difficult, or even impossible, to know what role AI has played in an incident. If an attacker uses a generative AI tool to identify a software vulnerability and write malware to exploit that vulnerability, for instance, the victim and their insurer may never know what role AI played in the incident. This means that insurers will struggle to collect consistent or comprehensive historic data sets about these risks.

AI risks, too, may change over time, just as cyber risks do. Here again, this is not equally true of all AI risks. While cybersecurity incidents almost always involve some degree of adversarial planning—an attacker trying to compromise a computer system and adapting to safeguards and new technological developments—the same is not true of all AI incidents, which can result from errors or limitations in the technology itself, not necessarily any deliberate manipulation. But there are deliberate attacks on AI systems that insurers may struggle to predict using historical data—and even the incidents that are accidental rather than malicious may change and evolve considerably over time given how quickly AI systems are changing and being applied to new areas. All of these challenges point to the likelihood that insurers will have a hard time modeling these types of AI risks and will therefore struggle to price them, just as they have with cyber risks.

Difficulty of Risk Assessments

Another major challenge insurers have encountered in the cyber insurance industry is how to assess whether a company has done a good job of protecting itself against cyber threats. The industry standard for these assessments is the long questionnaire that companies fill out about their security posture but that often fails to capture key technical nuances about how safeguards like encryption and multi-factor authentication are implemented and configured. This makes it difficult for insurers to link premiums to their policyholders’ risk exposure, because they have no good way of measuring that exposure. So instead, most premiums are set according to how much revenue a company generates or its industry sector. This means that companies often aren’t rewarded with lower premiums for investing in more security safeguards and therefore have little incentive to make those investments.
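To see why that pricing method blunts incentives, consider this stylized sketch. All names and figures are hypothetical; the point is simply that a premium computed from revenue and sector alone cannot reward a firm for adding safeguards.

```python
# Stylized illustration only: revenue- and sector-based pricing, as described
# above, charges the same premium regardless of a firm's safeguards, so
# security investments go unrewarded. All figures and names are hypothetical.

from dataclasses import dataclass, field

SECTOR_FACTOR = {"healthcare": 1.4, "retail": 1.2, "manufacturing": 1.0}


@dataclass
class Insured:
    annual_revenue: float                              # USD
    sector: str
    safeguards: set = field(default_factory=set)       # e.g., {"mfa", "encryption"}


def revenue_based_premium(firm: Insured, base_rate: float = 0.001) -> float:
    """Price off revenue and sector only; safeguards never enter the formula."""
    return firm.annual_revenue * base_rate * SECTOR_FACTOR[firm.sector]


if __name__ == "__main__":
    bare = Insured(50_000_000, "retail")
    hardened = Insured(50_000_000, "retail", {"mfa", "encryption", "edr"})
    # Identical premiums despite very different security postures.
    print(revenue_based_premium(bare), revenue_based_premium(hardened))
```

Both firms pay an identical premium, so the hardened firm’s security investments are, from a pricing standpoint, invisible.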

A similar—and arguably greater—challenge exists for assessing organizations’ exposure to AI risks. AI risks are so varied and AI systems are so complex that identifying all of the relevant risks and auditing all of the technical components and code related to those risks requires technical experts that most insurers are unlikely to have in-house. While insurers may try partnering with tech firms to perform these assessments—as they have in the past for cybersecurity assessments—they will also probably face pressure from brokers and clients to keep the assessment process lightweight and non-intrusive to avoid losing customers to their competitors. This has certainly been the case in the cyber insurance market, where many carriers continue to rely on questionnaires instead of other, more accurate assessment methods in order to avoid upsetting their clients. 

But if insurers can’t assess their customers’ risk exposure, then they can’t help drive down that risk by rewarding the firms who have done the most to reduce their risk with lower premiums. To the contrary, this method of measuring and pricing risk signals to insureds that investments in risk mitigation are not worthwhile, since such efforts have little effect on premiums and primarily benefit insurers by reducing their exposure. This is yet another reason to be cautious about the potential for insurers to help make AI systems safer and more secure.

Uncertainty About Risk Mitigation Best Practices

Figuring out how to assess cyber risk exposure is not the only challenge insurers have encountered in underwriting cyber insurance. They have also struggled to figure out what safeguards and security controls they should demand of their policyholders. While many insurers require common controls like encryption, firewalls, and multi-factor authentication, they often lack good empirical evidence about which of these security measures are most effective. Even in their own claims data sets, insurers don’t always have reliable information about which safeguards were or were not in place when incidents occurred, because the very lawyers insurers supply to oversee incident investigations sometimes don’t want that information recorded or shared for fear of it being used in any ensuing litigation.

The uncertainty about which best practices insurers should require from their customers is even greater when it comes to measures aimed at making many types of AI systems safer and more secure. There is little consensus about how best to do that beyond some broad ideas about audits, transparency, testing, and red teaming. If insurers don’t know which safeguards or security measures are most effective, then they may not require the right ones, further weakening their ability to reduce risk for their policyholders.

Catastrophic Risk

One final characteristic that AI and cyber risks share is the potential for large-scale, interconnected incidents—catastrophic risks—that generate more damage than insurers can cover. In cyber insurance, the potential for catastrophic risk stems in part from the fact that all organizations rely on a fairly centralized set of software providers, cloud providers, and other computing infrastructure. This means that an attack on the Windows operating system, or on Amazon Web Services, could cause major damage to an enormous number of organizations in every country and every industry sector, creating potentially huge losses for insurers, who would have no way to meaningfully diversify their risk pools. This has led cyber insurers and reinsurers to be relatively cautious about how much cyber risk they underwrite and to maintain high deductibles for these policies.

AI foundation models and infrastructure are similarly concentrated in a small number of companies, indicating that there is similar potential for an incident targeting one model to have far-reaching consequences. Future AI systems may also pose a variety of catastrophic risks, such as the possibility of these systems turning against humans or causing major physical accidents. Such catastrophic risks pose particular challenges for insurers and can make them more wary of offering large policies, which may in turn make some companies discount these risks entirely notwithstanding the prospect of liability. 

Liability Limitation or Risk Reduction?

In general, the cyber insurance example suggests that when it comes to dealing with risks for which we do not have reliable data sets, cannot assess firms’ risk levels, do not know what the most effective safeguards are, and have some potential for catastrophic consequences, insurers will end up helping their customers limit their liability but not actually reduce their risk exposure. For instance, in the case of cyber insurance, this may mean involving lawyers early in the incident response process so that any relevant information is shielded against discovery in future litigation—but not actually meaningfully changing the preventive security controls firms have in place to make incidents less likely to occur. 

It is easy to imagine that imposing greater liability on AI companies could produce a similar outcome, where insurers intervene to help reduce that liability—perhaps by engaging legal counsel or mandating symbolic safeguards aimed at minimizing litigation or regulatory exposure—without meaningfully improving the safety or security of the underlying AI systems. That’s not to say insurers won’t play an important role in covering certain types of AI risks, or in helping pool risks for new types of AI systems. But it does suggest they will be able to do little to incentivize tech companies to put better safeguards in place for many of their AI systems.

That’s why California is wise to be focusing on reporting and transparency rather than liability in its new law. Requiring companies to report on risks and incidents can help build up data sets that enable insurers and governments to do a better job of measuring risks and the impact of different policy measures and safeguards. Of course, regulators face many of the same challenges that insurers do when it comes to deciding which safeguards to require for high-risk AI systems and how to mitigate catastrophic risks. But at the very least, regulators can help build up more robust data sets about the known risks associated with AI, the safeguards that companies are experimenting with, and how well they work to prevent different types of incidents. 

That type of regulation is badly needed for AI systems, and it would be a mistake to assume that insurers will take on the role of data collection and assessment themselves, when we have seen them try and fail to do that for more than two decades in the cyber insurance sector. The mandatory reporting for cybersecurity incidents that will go into effect next year under CIRCIA could have started twenty years ago if regulators hadn’t assumed that the private sector—led by insurers—would be capable of collecting that data on its own. And if it had started twenty years ago, we would probably know much more than we do today about the cyber threat landscape and the effectiveness of different security controls—information that would itself lead to a stronger cyber insurance industry. 

If regulators are wise, they will learn the lessons of cyber insurance and push for these types of regulations early on in the development of AI rather than focusing on imposing liability and leaving it in the hands of tech companies and insurers to figure out how best to shield themselves from that liability. Liability can be useful for dealing with some AI risks, but it would be a mistake not to recognize its limits when it comes to making emerging technologies safer and more secure.

OUP book: Architectures of global AI governance

Unbundling AI openness

Why give AI agents actual legal duties?

The core proposition of Law-Following AI (LFAI) is that AI agents should be designed to refuse to take illegal actions in the service of their principals. However, as Ketan and I explain in our writeup of LFAI for Lawfare, this raises a significant legal problem: 

[A]s the law stands, it is unclear how an AI could violate the law. The law, as it exists today, imposes duties on persons. AI agents are not persons, and we do not argue that they should be. So to say “AIs should follow the law” is, at present, a bit like saying “cows should follow the law” or “rocks should follow the law”: It’s an empty statement because there are at present no applicable laws for them to follow.

Let’s call this the Law-Grounding Problem for LFAI. LFAI requires defining AI actions as either legal or illegal. The problem arises because courts generally cannot reason about the legality of actions taken by an actor without some sort of legally recognized status, and AI systems currently lack any such status.[ref 1]

In the LFAI article, we propose solving the Law-Grounding Problem by making AI agents “legal actors”: entities on which the law actually imposes legal duties, even if they have no legal rights. This is explained and defended more fully in Part II of the article. Let’s call this the Actual Approach to the Law-Grounding Problem.[ref 2] Under the Actual Approach, claims like “that AI violated the Sherman Act” are just as true within our legal system as claims like “Jane Doe violated the Sherman Act.”

There is, however, another possible approach that we did not address fully in the article: saying that an AI agent has violated the law if it took an action that, if taken by a human, would have violated the law.[ref 3] Let’s call this the Fictive Approach to the Law-Grounding Problem. Under the Fictive Approach, claims like “that AI violated the Sherman Act” would not be true in the same way that statements like “Jane Doe violated the Sherman Act” are. Instead, statements like “that AI violated the Sherman Act” would be, at best, a convenient shorthand for statements like “that AI took an action that, if taken by a human, would have violated the Sherman Act.”

I will argue that the Actual Approach is preferable to the Fictive Approach in some cases.[ref 4] Before that, however, I will explain why someone might be attracted to the Fictive Approach in the first place.

Motivating the Fictive Approach

To say that something is fictive is not to say that it is useless; legal fictions are common and useful. The Fictive Approach to the Law-Grounding Problem has several attractive features.

The first is its ease of implementation: the Fictive Approach does not require any fundamental rethinking of legal ontology. We do not need to either grant AI agents legal personhood or create a new legal category for them.

The Fictive Approach might also track common language use: when people make statements like “Claude committed copyright infringement,” they probably mean it in the fictive sense. 

Finally, the Fictive Approach also mirrors how we think about similar problems, like immunity doctrines. The King of England may be immune from prosecution, but we can nevertheless speak intelligibly of his actions as lawful or unlawful by analyzing what the legal consequences would be if he were not immune.

Why prefer the Actual Approach?

Nevertheless, I think there are good reasons to prefer the Actual Approach over the Fictive Approach.

Analogizing to Humans Might Be Difficult

The strongest reason, in my opinion, is that AI agents may “think” and “act” very differently from humans. The Fictive Approach requires us to take a string of actions that an AI did and ask whether a human who performed the same actions would have acted illegally. The problem is that AI agents can take actions that could be very hard for humans to take, and so judges and jurors might struggle to analyze the legal consequences of a human doing the same thing. 

Today’s proto-agents are somewhat humanlike in that they receive instructions in natural language, use computer tools designed for humans, reason in natural language, and generally take actions serially at approximately human pace and scale. But we should not expect this paradigm to last. For example, AI agents might soon reason in internal representations that humans cannot read, use tools and interfaces not designed for humans, run as many coordinated copies of themselves in parallel, and act at speeds and scales far beyond what any individual human could match.

And these are just some of the most foreseeable; over time, AI agents will likely become increasingly alien in their modes of reasoning and action. If so, then the Fictive Approach will become increasingly strained: judges and jurors will find themselves trying to determine whether actions that no human could have taken would have violated the law if performed by a human. At a minimum, this would require unusually good analogical reasoning skills; more likely, the coherence of the reasoning task would break down entirely.

Developing Tailored Laws and Doctrines for AIs

LFAI is motivated in large part by the belief that AI agents aligned to “a broad suite of existing laws”[ref 5] would be much safer than AI agents unbound by existing laws. But new laws specifically governing the behavior of AI agents will likely be necessary as AI agents transform society.[ref 6] The Fictive Approach, however, cannot work for such AI-specific laws. Recall that the Fictive Approach says that an action by an AI agent violates a law just in case a human who took that action would have violated that law. If the law in question applies only to AI agents, the Fictive Approach cannot be applied: no human could violate that law.

Relatedly, we may wish to develop new AI-specific legal doctrines, even for laws that apply to both humans and AIs. For example, we might wish to develop new doctrines for applying existing laws with a mental state component to AI agents.[ref 7] Alternatively, we may need to develop doctrines for determining when multiple instances of the same (or similar) AI models should be treated as identical actors. But the Fictive Approach is in tension with the development of AI-specific doctrines, since the whole point of the Fictive Approach is precisely to avoid reasoning about AI systems in their own right.

These conceptual tensions may be surmountable. But as a practical matter, a legal ontology that enables courts and legislatures to actually reason about AI systems in their own right seems more likely to lead to nuanced doctrines and laws that are responsive to the actual nature of AI systems. The Fictive Approach, by contrast, encourages courts and legislatures to attempt to map AI actions onto human actions, an exercise that may overlook or minimize the significant differences between humans and AI systems.

Grounding Respondeat Superior Liability

Some scholars propose using respondeat superior to impose liability on the human principals of AI agents for any “torts” committed by the latter.[ref 8] However, “[r]espondeat superior liability applies only when the employee has committed a tort. Accordingly, to apply respondeat superior to the principals of an AI agent, we need to be able to say that the behavior of the agent was tortious.”[ref 9] We can only say that the behavior of an AI agent was truly tortious if it had a legal duty to violate. The Actual Approach allows for this; the Fictive Approach does not.

Of course, another option is simply to use the Fictive Approach for the application of respondeat superior liability as well. However, the Actual Approach seems preferable insofar as it doesn’t require this additional change. More generally, precisely because the Actual Approach integrates AI systems into the legal system more fully, it can be leveraged to parsimoniously solve problems in areas of law beyond LFAI.

In the LFAI article, we take no position as to whether AI agents should be given legal personhood: a bundle of duties and rights.[ref 10] However, there may be good reasons to grant AI agents some set of legal rights.[ref 11] 

Treating AI agents as legal actors under the Actual Approach creates optionality with respect to legal personhood: if the law recognizes an entity’s existence and imposes duties on it, it is easier for the law to subsequently grant that entity rights (and therefore personhood). But, we argue, the Actual Approach creates no obligation to do so:[ref 12] the law can coherently say that an entity has duties but no rights. Since it is unclear whether it is desirable to give AIs rights, this optionality is valuable.

*      *      *

AI companies[ref 13] and policymakers[ref 14] are already tempted to impose legal duties on AI systems. To make serious policy progress towards this, they will need to decide whether to actually do so, or merely use “lawbreaking AIs” as shorthand for some strained analogy to lawbreaking humans. Choosing the former path—the Actual Approach—is simpler and more adaptable, and therefore preferable. 

Protecting AI whistleblowers

In May 2024, OpenAI found itself at the center of a national controversy when news broke that the AI lab was pressuring departing employees to sign contracts with extremely broad nondisparagement and nondisclosure provisions—or else lose their vested equity in the company. This would essentially have required former employees to avoid criticizing OpenAI for the indefinite future, even on the basis of publicly known facts and nonconfidential information.

Although OpenAI quickly apologized and promised not to enforce the provisions in question, the damage had already been done—a few weeks later, a number of current and former OpenAI and Google DeepMind employees signed an open letter calling for a “right to warn” about serious risks posed by AI systems, noting that “[o]rdinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The controversy over OpenAI’s restrictive exit paperwork helped convince a number of industry employees, commentators, and lawmakers of the need for new legislation to fill in gaps in existing law and protect AI industry whistleblowers from retaliation. This culminated recently in the AI Whistleblower Protection Act (AI WPA), a bipartisan bill introduced by Sen. Chuck Grassley (R-Iowa) along with a group of three Republican and three Democratic senators. Companion legislation was introduced in the House by Reps. Ted Lieu (D-Calif.) and Jay Obernolte (R-Calif.).

Whistleblower protections such as the AI WPA are minimally burdensome, easy to implement and enforce, and plausibly useful for facilitating government access to the information needed to mitigate AI risks. They also have genuine bipartisan appeal, meaning there is actually some possibility of enacting them. As increasingly capable AI systems continue to be developed and adopted, it is essential that those most knowledgeable about any dangers posed by these systems be allowed to speak freely.

Why Whistleblower Protections?

The normative case for whistleblower protections is simple: Employers shouldn’t be allowed to retaliate against employees for disclosing information about corporate wrongdoing. The policy argument is equally straightforward—company employees often witness wrongdoing well before the public or government becomes aware but can be discouraged from coming forward by fear of retaliation. Prohibiting retaliation is an efficient way of incentivizing whistleblowers to come forward and a strong social signal that whistleblowing is valued by governments (and thus worth the personal cost to whistleblowers).

There is also reason to believe that whistleblower protections could be particularly valuable in the AI governance context. Information is the lifeblood of good governance, and it’s unrealistic to expect government agencies and the legal system to keep up with the rapid pace of progress in AI development. Often, the only people with the information and expertise necessary to identify the risks that a given model poses will be the people who helped create it.

Of course, there are other ways for governments to gather information on emerging risks. Prerelease safety evaluations, third-party audits, basic registration and information-sharing requirements, and adverse event reporting are all tools that help governments develop a sharper picture of those risks. But these tools have mostly not been implemented in the U.S. on a mandatory basis, and there is little chance they will be in the near future.

Furthermore, whistleblower disclosures are a valuable source of information even in thoroughly regulated and relatively well-understood contexts like securities trading. In fact, the Securities and Exchange Commission has awarded more than $2.2 billion to 444 whistleblowers since its highly successful whistleblower program began in 2012. We therefore expect AI whistleblowers to be a key source of information no matter how sophisticated the government’s other information-gathering authorities (which, currently, are almost nonexistent) become.

Whistleblower protections are also minimally burdensome. A bill like the AI WPA imposes no affirmative obligations on affected companies. It doesn’t prevent them from going to market or integrating models into useful products. It doesn’t require them to jump through procedural hoops or prescribe rigid safety practices. The only thing necessary for compliance is to refrain from retaliating against employees or former employees who lawfully disclose important information about wrongdoing to the government. It seems highly unlikely that this kind of common-sense restriction could ever significantly hinder innovation in the AI industry. This may explain why even innovation-focused, libertarian-minded commentators like Martin Casado of Andreessen Horowitz and Dean Ball have reacted favorably to AI whistleblower bills like California SB 53, which would prohibit retaliation against whistleblowers who disclose information about “critical risks” from frontier AI systems. It’s also worth noting that the AI WPA’s House companion bill was introduced by Rep. Obernolte, who has been the driving force behind the controversial AI preemption provision in the GOP reconciliation bill.

The AI Whistleblower Protection Act

Beyond the virtues of whistleblower protections generally, how does the actual whistleblower bill currently making its way through Congress stack up?

In our opinion, favorably. A few weeks ago, we published a piece on how to design AI whistleblower legislation. The AI WPA checks almost all of the boxes we identified, as discussed below.

Dangers to Public Safety

First, and most important, the AI WPA fills a significant gap in existing law by protecting disclosures about “dangers” to public safety even if the whistleblower can’t point to any law violation by their employer. Specifically, the law protects disclosures related to a company’s failure to appropriately respond to “substantial and specific danger[s]” to “public safety, public health, or national security” posed by AI, or about “security vulnerabilit[ies]” that could allow foreign countries or other bad actors to steal model weights or algorithmic secrets from an AI company. This is significant because the most important existing protection for whistleblowers at frontier AI companies—California’s state whistleblower statute—only protects disclosures about law violations.

It’s important to protect disclosures about serious dangers even when no law has been violated because the law, with respect to emerging technologies like AI, often lags far behind technological progress. When the peer-to-peer file sharing service Napster was founded in 1999, it wasn’t immediately clear whether its practices were illegal. By the time court decisions resolved the ambiguity, a host of new sites using slightly different technology had sprung up and were initially determined to be legal before the Supreme Court stepped in and reversed the relevant lower court decisions in 2005. In a poorly understood, rapidly changing, and almost totally unregulated area like AI development, the prospect of risks arising from behavior that isn’t clearly prohibited by any existing law is all too plausible.

Consider a hypothetical: An AI company trains a new cutting-edge model that beats out its competitors’ latest offerings on a wide variety of benchmarks, redefining the state of the art for the nth time in as many months. But this time, a routine internal safety evaluation reveals that the new model can, with a bit of jailbreaking, be convinced to plan and execute a variety of cyberattacks that the evaluators believe would be devastatingly effective if carried out, causing tens of millions of dollars in damage and crippling critical infrastructure. The company, under intense pressure to release a model that can compete with the newest releases from other major labs, implements safeguards that employees believe can be easily circumvented but otherwise ignores the danger and misrepresents the results of its safety testing in public statements.

In the above hypothetical, is the company’s behavior unlawful? An enterprising prosecutor might be able to make charges stick in the aftermath of a disaster, because the U.S. has some very broad criminal laws that can be creatively interpreted to prohibit a wide variety of behaviors. But the illegality of the company’s behavior is at the very least highly uncertain.

Now, suppose that an employee with knowledge of the safety testing results reported those results in confidence to an appropriate government agency. Common sense dictates that the company shouldn’t be allowed to fire or otherwise punish the employee for such a public-spirited act, but under existing law it is doubtful whether the whistleblower would have any legal recourse if terminated. Knowing this, they might well be discouraged from coming forward in the first place. This is why establishing strong, clear protections for AI employees who disclose information about serious threats to public safety is important. This kind of protection is also far from unprecedented—currently, federal employees enjoy a similar protection for disclosures about “substantial and specific” dangers, and there are sector-specific protections for certain categories of private-sector employees, such as railroad workers who report “hazardous safety or security conditions.”

Importantly, the need to protect whistleblowers has to be weighed against the legitimate interest that AI companies have in safeguarding valuable trade secrets and other confidential business information. A whistleblower law that is too broad in scope might allow disgruntled employees to steal from their former employers with impunity and hand over important technical secrets to competitors. The AI WPA, however, sensibly limits its danger-reporting protection to disclosures made to appropriate government officials or internally at a company regarding “substantial and specific danger[s]” to “public safety, public health, or national security.” This means that, for better or worse, reporting about fears of highly speculative future harms will probably not be protected, nor will disclosures to the media or watchdog groups.

Preventing Contractual Waivers of Whistleblower Rights

Another key provision states that contractual waivers of the whistleblower rights created by the AI WPA are unenforceable. This is important because nondisclosure and nondisparagement agreements are common in the tech industry, and are often so broadly worded that they purport to prohibit an employee or former employee from making the kinds of disclosures that the AI WPA is intended to protect. It was this sort of broad nondisclosure agreement (NDA) that first sparked widespread public interest in AI whistleblower protections during the 2024 controversy over OpenAI’s exit paperwork.

OpenAI’s promise to avoid enforcing the most controversial parts of its NDAs did not change the underlying legal reality that allowed OpenAI to propose the NDAs in the first place, and that would allow any other frontier AI company to propose similarly broad contractual restrictions in the future. As we noted in a previous piece on this subject, there is some chance that attempts to enforce such restrictions against genuine whistleblowers would be unsuccessful, because of either state common law or existing state whistleblower protections. Even so, the threat of being sued for violating an NDA could discourage potential whistleblowers even if such a lawsuit would ultimately fail. A clear federal statutory indication that such contracts are unenforceable would therefore be a welcome development. The AI WPA provides exactly this, clearly resolving the NDA issue by stating that “[t]he rights and remedies provided for in this section may not be waived or altered by any contract, agreement, policy form, or condition of employment.”

Looking Forward

It’s not clear what will happen to the AI Whistleblower Protection Act. It appears as likely to pass as any AI measure we’ve seen, given the bipartisan enthusiasm behind it and the absence of substantial pushback from industry to date. But federal legislation is difficult to pass in general, and the fact that there has been little vocal opposition to the bill so far doesn’t mean that dissenting voices won’t make themselves heard in the coming weeks.

Regardless of what happens to this specific bill, those who care about governing AI well should continue to support efforts to pass something like the AI WPA. However concerned or unconcerned one may be about the dangers posed by AI, the bill as a whole serves a socially valuable purpose: establishing a uniform whistleblower protection regime for reports about security vulnerabilities and lawbreaking in a critically important industry.

Ten Highlights of the White House’s AI Action Plan

Today, the White House released its AI Action Plan, laying out the administration’s priorities for AI innovation, infrastructure, and adoption. Ultimately, the value of the Plan will depend on how it is operationalized via executive orders and the actions of executive branch agencies, but the Plan itself contains a number of promising policy recommendations. We’re particularly excited about:  

  1. The section on federal government evaluations of national security risks in frontier models. This section correctly identifies the possibility that “the most powerful AI systems may pose novel national security risks in the near future,” potentially including risks from cyberattacks and risks related to the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. Ensuring that the federal government has the personnel, expertise, and authorities needed to guard against these risks should be a bipartisan priority. 
  2. The discussion of interpretability and control, which recognizes the importance of interpretability to the use of advanced AI systems in national security and defense applications. The Plan also recommends three policy actions for advancing the science of interpretability, each of which seems useful for frontier AI security in expectation.
  3. The overall focus on standard-setting by the Center for AI Standards and Innovation (CAISI, formerly known as the AI Safety Institute) and other government agencies, in partnership with industry, academia, and civil society organizations.
  4. The recommendation on building an AI evaluations ecosystem. The science of evaluating AI systems’ capabilities is still in its infancy, but the Plan identifies a few promising ways for CAISI and other government agencies to support the development of this critical field.
  5. The emphasis on physical and cybersecurity for frontier labs and bolstering critical infrastructure cybersecurity. As Leopold Aschenbrenner pointed out in “Situational Awareness,” AI labs are not currently equipped to protect their model weights and algorithmic secrets from being stolen by China or other geopolitical rivals of the U.S., and fixing this problem is a crucial national security imperative. 
  6. The call to improve the government’s capacity for AI incident response. Advanced planning and capacity-building are crucial for ensuring that the government is prepared to respond in the event of an AI emergency. Incident response preparation is an effective way to increase resiliency without directly burdening innovation.
  7. The section on how the legal system should handle deceptive AI-generated “evidence.” Legal rules often lag behind technological development, and the guidance contemplated here could be highly useful to courts that might otherwise be unprepared to handle an influx of unprecedentedly convincing fake evidence. 
  8. The recommendations for ramping up export control enforcement and plugging loopholes in existing semiconductor export controls. Compute governance—preventing geopolitical rivals from gaining access to the chips needed to train cutting-edge frontier AI models—continues to be an effective policy tool for maintaining the U.S.’s lead in the race to develop advanced AI systems before China. 
  9. The suggested regulatory sandboxes, which could enable AI adoption and increase the AI governance capacity of sectoral regulatory agencies like the FDA and the SEC.
  10. The section on deregulation wisely rejects the maximalist position of the moratorium that was stripped from the recent reconciliation bill by a 99-1 Senate vote. Instead of proposing overbroad and premature preemption of virtually all state AI regulations, the Plan recommends that AI-related federal funding should not “be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
    • At the moment, it’s hard to identify any significant source of “AI-related federal funding” to states, although this could change in the future. This being the case, it will likely be difficult for the federal government to offer states any significant inducement towards deregulation unless it first offers them new federal money. And disincentivizing truly “burdensome” state regulations that would interfere with the effectiveness of federal grants seems like a sensible alternative to broader forms of preemption.
    • The Plan also seems to suggest that the FCC could preempt some state AI regulations under § 253 of the Communications Act of 1934. It remains to be seen whether and to what extent this kind of preemption is legally possible. At first glance, however, it seems unlikely that the FCC’s authority to regulate telecommunications services could legally be used for any especially broad preemption of state AI laws. Any broad FCC preemption under this authority would likely have to go through notice and comment procedures and might struggle to overcome legal challenges from affected states.

Christoph Winter’s remarks to the European Parliament on AI Agents and Democracy

Summary

On July 17, LawAI’s Director and Founder, Christoph Winter, was invited to speak before the European Parliament’s Special Committee on the European Democracy Shield, with the participation of IMCO and LIBE Committee members. Professor Winter was asked to present on AI governance, regulation, and democratic safeguards. He spoke about the democratic challenges that AI agents may present and how democracies could approach them.

Two recommendations were made to the Committee: first, requiring AI agents to be law-following by design; and second, strengthening the AI Office’s capacity to understand and address AI risks.

Transcript

Distinguished Members of Parliament, fellow speakers and experts,

Manipulating public opinion at scale used to require vast resources. This situation is changing quickly. During Slovakia’s 2023 election, a simple deepfake audio recording of a candidate discussing vote-buying schemes circulated just 48 hours before polls opened: too late for fact-checking, but not too late to reach thousands of voters. And deepfakes are really just the beginning.

AI agents, which are autonomous systems that can act on the internet like skilled human workers, are being developed by all major AI companies. And soon they could be able to simultaneously orchestrate large-scale manipulation campaigns, hack electoral systems, and coordinate cyber-attacks on fact-checkers—all while operating 24/7 at unprecedented scale.

Today, I want to propose two solutions to these democratic challenges. First, requiring AI agents to be Law-following by design. And second, strengthening the AI Office’s capacity to understand and address AI risks. Let me explain each.

Law-following AI requires AI systems to be architecturally constrained to refuse actions that would be illegal if performed by humans in the same position. Just as AIs are currently trained to decline to help build bombs, they would reject orders to violate constitutional rights or election laws.

Law-following AI is democratically compelling for three reasons: First, it is democratically legitimate. Laws represent our collective will, refined through democratic deliberation, rather than unilaterally determined corporate values. Second, it enables democratic adaptability. Laws can be changed through democratic processes, and AI agents designed to follow law can automatically adjust their behavior. Third, it offers a democratic shield—because without these constraints, we risk creating AI agents that blindly follow orders, and history has shown where blind obedience leads.

In practice, this would mean that AI agents bound by law would refuse orders to suppress political speech, manipulate elections, blackmail officials, or harass dissidents. This way, law-following AI could prevent authoritarian actors from using obedient AI agents to entrench their power. Of course, it can’t prevent all forms of manipulation—much harmful persuasion operates within legal bounds. But blocking AI agents from illegal attacks on democracy is a critical first step.

The EU’s Code of Practice on General-Purpose AI already recognizes this danger and identifies “lawlessness” as a model propensity that contributes to systemic risk. But just as we currently lack reliable methods to assess how persuasive AI systems are, we currently lack a way to reliably measure AI lawlessness.

And perhaps most concerningly—and this brings me to my second proposal—the AI Office currently lacks the institutional capacity to develop these crucial capabilities.

The AI Office needs sufficient technical, policy, and legal staff to rigorously analyze what companies submit under the Code of Practice and AI Act—to scrutinize their risk assessments, verify their mitigation measures, and spot gaps in their safety evaluations. In other words: When a company claims their AI agent is law-following, the AI Office must have the expertise and resources to independently test that claim. When developers report on persuasion capabilities—capabilities that even they may not fully understand—the AI Office needs experts who can identify what’s missing from those reports.

Rigorous evaluation isn’t just about compliance—it’s about how we learn: each assessment and each gap we identify builds our understanding of these systems. This is why adequate AI Office capacity matters: not just for evaluating persuasion capabilities or Law-following AI today, but for understanding and preparing for risks to democracy that grow with each model release.

To illustrate what the current resource gap looks like: Recent reports suggest Meta offered one AI researcher a salary package of €190 million. The AI Office—tasked with overseeing the entire industry—operates on less.

This gap between private power and public capacity is unsustainable for our democracy. If we’re serious about democracy, we must fund our institutions accordingly.

So to protect democracy, we can start with two things: AI agents bound by human laws, and an AI Office with the capacity to understand and evaluate the risks.

Thank you.

The full video can be watched here (starts 12:01:02).

Future frontiers for research in law and AI

LawAI’s Legal Frontiers team aims to incubate new law and policy proposals that are simultaneously:

  1. Anticipatory, in that they respond to a reasonable forecast of the legal and policy challenges that further advances in AI will produce
  2. Actionable, in that we can make progress within these workstreams even under significant uncertainty
  3. Accommodating to a wide variety of worldviews and technological trajectories, given the shared challenges that AI will create and the uncertainties we have about likely developments
  4. Ambitious, in that they both significantly reduce some of the largest risks from AI while also enabling society to reap its benefits

Currently, the Legal Frontiers team owns two workstreams:

  1. AI Agents and the Rule of Law
  2. International Regulatory Institutions

However, the general vision behind Legal Frontiers is to continuously spin out mature workstreams, freeing us to identify and incubate new ones. To that end, we recently updated LawAI’s Workstreams and Research Directions document to list some “Future Frontiers” on which we might work.

We don’t want people to wait for us to start working on these questions, however: they are already ripe for scholarly attention. For that reason, we have reproduced those Future Frontiers here.

Regulating Government-Developed Frontier AI

Today, governments act primarily as consumers of frontier AI technologies. Frontier AI systems are developed mainly by private companies with little or no initial government involvement; those companies may then tailor their general frontier AI offerings to meet the particular needs of governmental customers.[ref 1] In other words, the private sector is responsible for the primary development of frontier AI models and systems, with governmental steering entering, if at all, later in the commercialization lifecycle.

However, as governments increasingly realize the significant strategic implications of frontier AI technologies, they may wish to become more directly involved in the development of frontier AI systems at earlier stages of the development cycle.[ref 2] This could range from frontier AI systems initially developed under government contract, to a fully governmental effort to develop next-generation frontier AI systems.[ref 3] Indeed, a 2024 report from the U.S.-China Economic and Security Review Commission called for Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability.”[ref 4] 

Existing proposals for the regulation of the development and deployment of frontier AI systems envision the imposition of such regulations on private businesses, under the implicit assumption that frontier AI development and deployment will remain private-led. If and when governments do take a larger role in the development of frontier AI systems, new regulatory paradigms will be needed. Such proposals need to identify and address unique challenges and opportunities that government-led AI development will pose, as compared to today’s private-led efforts.

Examples of possible questions in this workstream could include:

  1. How are safety and security risks in high-stakes governmental research projects (e.g., the Manhattan Project) usually regulated?
  2. How might the government steer development of frontier AI technologies if it wished to do so?
  3. What existing checks and balances would apply to a government program to develop frontier AI technologies?
  4. How would ideal regulation of government-directed frontier AI development vary depending on the mechanism used for such direction (e.g., contract versus government-run development)?
  5. How might ideal regulation of government-directed frontier AI development vary depending on whether the development is led by military or civilian parts of the government?
  6. If necessary to procure key inputs for the program (e.g., compute), how could the U.S. government collaborate with select allies on such programs?[ref 5]

Accelerating Technologies that Defend against Risks from AI

It is likely infeasible[ref 6] and/or undesirable[ref 7] to fully prevent the wide proliferation of many high-risk AI systems. There is therefore increasing interest in developing technologies[ref 8] to defend against possible harms from diffuse AI systems, and remedy those harms where defensive measures fail.[ref 9] Collectively, we call these “defensive technologies.”[ref 10] 

Many of the most valuable contributions to the development and deployment of defensive technologies will not come from legal scholars, but rather from some combination of entrepreneurs, technological research and development, and funders. But legal change may also play a role in more directly accelerating the development and deployment of defensive technologies, such as by removing barriers that raise the costs of research or reduce the rewards of deployment.[ref 11]

Examples of general questions that might be valuable to explore include:

  1. What are examples of existing policies that unnecessarily hinder research and development into defense-enhancing technologies, such as by (a) raising the costs of conducting that research, or (b) reducing the expected profits of deployment of defense-enhancing technologies?[ref 12] 
  2. What are existing legal or policy barriers that inhibit effective diffusion of defensive technologies across society?[ref 13]
  3. How can the law preferentially[ref 14] accelerate defensive technologies?  

Regulating Internal Deployment

Many existing AI policy proposals regulate AI systems at the point when they are first “deployed”: that is, made available for use by persons external to the developer. However, pre-deployment use of AI models by the developing company—“internal deployment”—may also pose substantial risks.[ref 15] Because most policy proposals aimed at reducing large-scale risks from AI regulate AI only at or after the point of external deployment, policy proposals for regulating internal deployment would be valuable.

Example questions in the workstream might include:

  1. What existing modes of regulation in other AI industries are most analogous to regulation of internal deployment?[ref 16]
  2. How can the state identify which AI developers are appropriate targets for regulation of internal deployment?
  3. How can regulation of internal deployment simultaneously reduce risk and allow for appropriate exploration of model capabilities and risks?
  4. What are the constitutional (e.g., Fourth Amendment) limitations on regulation of internal deployment?
  5. How can regulation of internal deployment be designed to reduce risks of espionage and information leakage?

Patching Legal Gaps and Loopholes

AI technologies performing legal tasks will likely surface loopholes or gaps in the law: that is, actions that the law permits but that policymakers would likely prefer to prohibit. There are several reasons to expect this:

  1. AI itself constitutes a significant technological change, and technological changes often surface loopholes or gaps in the law.[ref 17]
  2. AI might accelerate technological change and economic growth,[ref 18] which will similarly often surface gaps or loopholes in the law.
  3. AI might be more efficient at finding gaps or loopholes in the law, and quickly exploiting them.

Given that lawmaking is a slow and deliberative process, actors can often exploit gaps or loopholes before policymakers can “patch” them. While this dynamic is not new, AI systems may be able to cause more harm or instability by finding or exploiting gaps and loopholes than humans have in the past, due to their greater speed of action, ability to coordinate, dangerous capabilities, and (possibly) lack of internal moral constraints.

This suggests that it may be very valuable for policymakers to “patch” legal gaps and loopholes by quickly enacting new laws. However, constitutional governance is often intentionally slow, deliberative, and decentralized, suggesting that it is unwise and sometimes illegal to accelerate lawmaking in certain ways.

This tension suggests that it would be valuable to research how new legislative and administrative procedures could quickly “patch” legal gaps and loopholes through new law while also complying with the letter and spirit of constitutional limitations on lawmaking.

Responsibly Advancing AI-Enabled Governance

Recent years have seen robust governmental interest in the use of AI technologies for administration and governance.[ref 19] As systems advance in capability, this may create significant risks of misuse,[ref 20] as well as safety risks from the deployment of advanced systems in high-stakes governmental infrastructure.

A recent report[ref 21] identifies the dual imperative for governments to:

  1. Quickly adopt AI technology to enhance state capacity, but
  2. Take care when doing so.

The report lays out three types of interventions worth considering:

  1. “‘Win-win’ opportunities” that help with both adoption and safety;[ref 22]
  2. “Risk-reducing interventions”; and
  3. “Adoption-accelerating interventions.”

Designing concrete policies in each of these categories is very valuable: especially “win-win” policies, but also risk-reducing or adoption-accelerating policies that do not come at the expense of the other goal.

As AI systems are able to complete more of the tasks typically associated with traditional legal functions—drafting legislation and regulation, adjudicating,[ref 23] litigating, drafting contracts, counseling clients, negotiating, investigating possible violations of law, generating legal research—it will be natural to consider whether and how these tasks should be automated. 

We can call AI systems performing such functions “AI lawyers.” If implemented well, AI lawyers could help with many of the challenges that AI could bring. AI lawyers could write new laws to regulate governmental development or use of frontier AI, monitor governmental uses of AI, and craft remedies for violations. AI lawyers could also identify gaps and loopholes in the law, accelerate negotiations between lawmakers, and draft legislative “patches” that reflect lawmakers’ consensus. 

However, entrusting ever more power to AI lawyers entails significant risks. If AI lawyers are not themselves law-following, they may abuse their governmental station to the detriment of citizens. If such systems are not intent-aligned,[ref 24] entrusting AI systems with significant governmental power may make it easier for those systems to erode humanity’s control over human affairs. Regardless of whether AI lawyers are aligned, delegating too many legal functions to AI lawyers may frustrate important rule-of-law values, such as democratic responsiveness, intelligibility, and predictability. Furthermore, there are likely certain legal functions that it is important for natural persons to perform, such as serving as a judge on the court of last resort.

Research into the following questions may help humanity navigate the promises and perils of AI lawyers:

  1. Which legal functions should never be automated?
  2. Which legal functions, if entrusted to an AI lawyer, would significantly threaten democratic and rule-of-law values?
  3. How can AI lawyers enhance human autonomy and rule-of-law values?
  4. How can AI lawyers enhance the ability of human governments to respond to challenges from AI?
  5. What substantive safety standards should AI lawyers have to satisfy before being deployed in the human legal system?
  6. Which new legal checks and balances should be introduced if AI lawyers accelerate the speed of legal processes? 

Related to the above, there is also a question of how we can accelerate potential technologies that would defend against general risks to the rule of law and/or democratic accountability. For instance, as lawyers, we may also be particularly well-placed to advance legal reforms that make it easier for citizens to leverage “AI lawyers” to help them defend against vexatious litigation and governmental oppression, or pursue meritorious claims.[ref 25] For example, existing laws regulating the practice of law may impose barriers on citizens’ ability to leverage AI for their legal needs.[ref 26] This suggests further questions, such as:

  1. Who will benefit by default from the widespread availability of cheap AI lawyers?
  2. Will laws regulating the practice of law form a significant barrier to defensive (and other beneficial) applications of AI lawyers?
  3. How should laws regulating the practice of law accommodate the possibility of AI lawyers, especially those that are “defensive” in some sense?
  4. How might access to cheap AI lawyers affect the volume of litigation and pursuit of claims? If there is a significant increase, would this result in a counterproductive effect by slowing down court processing times or prompting the judicial system to embrace technological shortcuts? 

Approval Regulation in a Decentralized World

After the release of GPT-4, a number of authors and policymakers proposed compute-indexed approval regulation, under which frontier AI systems trained with large amounts of compute would be subjected to heightened predeployment scrutiny.[ref 27] Such regulation was perceived as attractive in large part because, under the scaling paradigm that produced GPT-4, development of frontier AI systems depended on the use of a small number of large data centers, which could (in theory) be easily monitored.

However, subsequent technological developments that reduce the amount of centralized compute needed to achieve frontier AI capabilities (namely improvements in decentralized training[ref 28] and the rise of reasoning models)[ref 29] have cast serious doubts on the long-term viability of compute-indexed approval regulation as a method for preventing unapproved development of highly capable AI models.[ref 30] 

It is not clear, however, that these developments mean that other forms of approval regulation for frontier AI development and deployment would be totally ineffective. Many activities are subject to reasonably effective approval regulation notwithstanding their highly distributed nature. For example, people generally respect laws requiring a license to drive a car, hunt, or practice law, even though these activities are very difficult for the government to reliably prevent ex ante. Further research into approval regulation for more decentralized activities could therefore help illuminate whether approval regulation for frontier AI development could remain viable, at an acceptable cost to other values (e.g., privacy, liberty), notwithstanding these developments in the computational landscape.

Examples of possible questions in this workstream could include:

  1. How effective are existing approval regulation regimes for decentralized activities?
  2. Which decentralized activities most resemble frontier AI development under the current computing paradigm?
  3. How do governments create effective approval regulation regimes for decentralized activities, and how might those mechanisms be applied to decentralized frontier AI development?
  4. How can approval regulation of decentralized frontier AI development be implemented at acceptable costs to other values (e.g., privacy, liberty, administrative efficiency)?

Two Byrd Rule problems with the AI moratorium

Note: this commentary was drafted on June 26, 2025, as a memo not intended for publication; we’ve elected to publish it in case the analysis laid out here is useful to policymakers or commentators following ongoing legislative developments regarding the proposed federal moratorium on state AI regulation. The issues noted here are relevant to the latest version of the bill as of 2:50 p.m. ET on June 30, 2025.

Two Byrd Rule issues have emerged, both of which should be fixed. It appears that the Parliamentarian has not ruled on either.

Effects on existing BEAD funding

The Parliamentarian may have already identified the first Byrd Rule issue: the plain text of the AI Moratorium would affect all $42.45 billion in BEAD funding, not just the newly allocated $500 million. It is not 100% certain that a court would read the statute this way, but it is the most likely outcome. We analyzed this problem in a recently published commentary. This issue could be fixed via an amendment.

Private enforcement of the moratorium

In that same article, we flagged a second Byrd Rule issue: the AI Moratorium seemingly creates enforcement rights in private parties. That’s a problem because, under the Byrd Rule, the AI Moratorium must be a “necessary term or condition” of an outlay, and an enforcement right held by third parties cannot be characterized as a necessary term or condition of an outlay that does not concern them. This can be fixed by clarifying that the only enforcement mechanism is withdrawal or denial of the new BEAD funding.

The text at issue – private enforcement of the moratorium

The plain text of the moratorium, and applicable legal precedents, likely empower private parties to enforce the moratorium in court. Stripped down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That sounds like a prohibition. It doesn’t mention the Department of Commerce, nor does it leave it to the Secretary’s discretion whether the prohibition applies. If states satisfy the criteria, they likely are prohibited from enforcing their AI laws.

Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws—e.g., the Airline Deregulation Act, 49 U.S.C. § 41713—have used similar language to expressly preempt state laws, courts have interpreted this as authorizing private parties to sue for an injunction preventing enforcement of the preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).

What would happen – private lawsuits to enforce the moratorium

Private parties could vindicate this right in one of two ways. First, if a private party (e.g., an AI company) fears that a state will imminently sue it for violating that state’s AI law, the private party could seek a declaratory judgment in federal court. Second, if the state actually sues the private party, that party could raise the moratorium as a defense to the lawsuit. If the private party is based in the same state, that defense would be heard in state court and could result in dismissal of the state’s claims; if the party is from out of state, the case could be removed to federal court, where a judge could likewise throw out the state’s claims.

Why it’s a Byrd Rule problem – private rights are not “terms or conditions”

The AI Moratorium must be a “necessary term or condition” of an outlay. In this case, promising not to enforce AI laws is a valid “term or condition” of the grant. Passively opening oneself up to lawsuits and defenses by private parties is not. Those lawsuits occur long after states take the money, are outside their control, and involve the actions of individuals who are not parties to the grant agreement. They also have significant effects unrelated to spending: binding the actions of states and invalidating laws in ways completely separate from the underlying transaction between the Department of Commerce and the states. It is perfectly compatible with the definition of “terms and conditions” for the Department of Commerce to deobligate funds if the terms of its grant are violated. It is an entirely different thing to create a defense or cause of action for third parties and to allow those parties to interfere with the enforcement power of states. The creation of rights for a third party uninvolved in the delivery or receipt of an outlay cannot be considered a necessary term or condition.