AI Preemption and “Generally Applicable” Laws
Proposals for federal preemption of state AI laws, such as the moratorium that was removed from the most recent reconciliation bill in June 2025, often include an exception for “generally applicable” laws. Despite the frequency with which this phrase appears in legislative proposals and the important role it plays in the arguments of preemption advocates, there is very little agreement among experts as to what exactly “generally applicable” means in the context of AI preemption. Unfortunately, this means that, for any given preemption proposal, very little can be said with certainty about which laws would or would not be exempted.
The most we can say for sure is that the term “generally applicable” is supposed to describe a law that does not single out or target artificial intelligence specifically. Thus, a state law like California’s recently enacted “Transparency in Frontier Artificial Intelligence Act” (SB 53) would likely not be considered “generally applicable” by a court, because it imposes new requirements specifically on AI companies, rather than requirements that apply “generally” and affect AI companies only incidentally if at all.
This basic definition, however, leaves a host of important questions unanswered. What about laws that don’t specifically mention AI, but are nevertheless clearly intended to address issues created by AI systems? Tennessee’s ELVIS Act, which was designed to protect musicians from unauthorized commercial use of their voices, is one example of such a law. It prohibits the reproduction of an artist’s voice by any technological means, but the law was obviously passed in 2024 because recent advances in AI capabilities have made it possible to reproduce celebrity voices far more accurately than was previously possible. Alternatively, what about laws that were not originally intended to apply to AI systems, but that happen to place a disproportionate burden on AI systems relative to other technologies? No one knows precisely how a court would resolve the question of whether such laws are “generally applicable,” and if you asked four different people who think about AI preemption for a living you might well get four different answers. If federal preemption legislation is eventually enacted, and if an exception for “generally applicable” laws is included, this question will likely be extensively litigated—and it’s likely that different courts will come to different conclusions.
Usually, the best way to get an idea of how a court will interpret a given phrase is to look at how courts have interpreted the same phrase in similar contexts in the past. However, while there is some existing case law discussing the meaning of “generally applicable” in the context of preemption, LawAI’s research hasn’t turned up any cases that shed a great deal of light on the question of what the term would mean in the specific context of AI preemption. It’s therefore likely that we won’t have a clear idea of what “generally applicable” really means until some years from now, when courts may (or may not) have had occasion to answer the question with respect to a variety of different arguably “generally applicable” state laws.
Last updated: December 11, 2025, at 4:19 p.m. Eastern Time
The Genesis Mission Executive Order: What It Does and How It Shapes the Future of AI-Enabled Scientific Research
Summary
- The Genesis Mission EO seeks to build a federal AI-enabled science platform by directing the Department of Energy to organize, plan, and begin assembling the technical resources it will require, such as computing capacity, AI models, and data.
- DOE will identify 20+ scientific and technology challenges, map federal compute and data resources, and demonstrate an initial capability using existing infrastructure.
- As with many EOs, the order assigns and coordinates responsibilities, but cannot itself provide new funding or legal authority, so realizing the Genesis Mission’s full vision will depend on Congress, other agencies, and private sector partners.
- The effort plays to DOE’s strengths—the national labs, high-performance computing, diverse scientific capabilities—and gives the federal government an opportunity to build capacity to understand and govern advancing AI-enabled science.
- The EO creates a timely opportunity to align federal policy with the Genesis Mission and modernize oversight for potential dual-use concerns as AI, large datasets, and automated labs accelerate scientific research.
On November 24, the White House released an Executive Order launching the Genesis Mission—a bold plan to build a unified national AI-enabled science platform linking federal supercomputers, secure cloud networks, public and proprietary datasets, scientific foundation models, and even automated laboratory systems. The Administration frames the Genesis Mission as a Manhattan Project-scale scientific effort.
The EO lays out the organizational and planning framework for the Genesis Mission and tasks the Department of Energy with assembling the resources required to launch it. Because the Mission will operate in highly consequential scientific domains—such as biotechnology, where dual-use safety and security issues routinely arise—it gives the federal government a timely opportunity to build the oversight and governance capacity that will be needed as AI-enabled science advances.
1. What the EO Actually Does
The EO directs the DOE and White House Office of Science and Technology Policy (OSTP) to spend the next year defining the scope of the Genesis Mission and proving what can be done using existing authority and appropriations. It’s important to keep in mind that an Executive Order cannot itself create new funding or new legal authority, so future steps will depend on Congressional action.
Mandated near-term tasks include:
- Identify at least twenty “science and technology challenges of national importance” that must span priority domains such as biotechnology, advanced manufacturing, critical materials, quantum computing, nuclear science, and semiconductors. DOE will draft the initial list, and OSTP will expand and finalize it.
- Inventory all relevant federal resources, including computing, data, networking, and automated experimentation capabilities.
- Define initial datasets and AI models and develop a plan with “risk-based cybersecurity measures” that will enable incorporating data from federally funded research, other agencies, academia, and approved private sector partners.
- Produce an initial demonstration of the “American Science and Security Platform,” using only currently available tools and legal authorities.
These are primarily coordination and planning tasks aimed at defining the scope of an integrated AI science platform and demonstrating what can be done with existing resources within DOE. DOE’s activities set forth in the EO appear to align with Section 50404 of the OBBBA reconciliation bill (H.R. 1), which appropriates $150 million through September 2026 to DOE for work on “transformational artificial intelligence models.” Although not referenced in the EO, Section 50404 directs DOE to develop a public-private infrastructure to curate large scientific datasets and create “self-improving” AI models with applications such as more efficient chip design and new energy technologies. DOE’s Section 50404 appropriation is the subject of an ongoing Request for Information (RFI), in which DOE is seeking input on how to structure and implement such public-private research consortia.
The EO does not itself mandate building the full system beyond DOE. Rather, these steps begin the process of assembling underlying infrastructure. The EO outlines broad interagency coordination, but key details need to be worked out, including who can access the platform, how users will be vetted, and whether it will be open to broad scientific use or limited to national security-priority domains.
In that sense, the EO is best understood as establishing the groundwork for a future AI-enabled and automated science infrastructure—while its full build-out will depend on Congress, other agencies, and private sector partnerships.
2. Who Holds the Pen
The Genesis Mission envisions centralized leadership for interagency coordination, with two primary actors:
- DOE: Responsible for identifying and assembling the technical components: supercomputers, datasets, models, automated labs.
- OSTP: Responsible for government-wide coordination through the National Science and Technology Council.
Technical leadership will likely sit with Under Secretary for Science Darío Gil, who oversees the DOE national labs and major research programs. Strategic coordination, including interactions with other agencies and industry, will likely run through Michael Kratsios, OSTP Director and Presidential Science Advisor.
The EO directs only DOE to take specific actions. In practice, this means that the broader interagency coordination is largely aspirational and will likely depend on Congressional action to add or redirect funding toward the Genesis Mission. At this point, the EO envisions DOE as the primary operator of the eventual platform, with OSTP shaping strategy. The practical impact of the Mission will largely depend on how these resources are ultimately shared and made accessible across agencies, which the EO leaves open for now.
3. The Goal: Accelerating High-Stakes Science
Here’s where the Genesis Mission may be most consequential. The EO envisions a platform that sits at the center of scientific domains with national and economic significance. These are areas where integrating AI models, data drawn from government and private databases, and automated high-throughput experimentation can provide high leverage.
For example, in biological research, an integrated AI-science platform could accelerate drug development, improve biomanufacturing, strengthen pandemic preparedness, tackle chronic disease, and support emerging industries that can drive economic growth and help the United States maintain global leadership. DOE is well positioned to contribute here, given its national laboratories, high-performance computing, and experience managing large-scale scientific infrastructure.
The Genesis Mission EO suggests that the Administration expects the Mission to support research with high scientific value as well as complex security and safety considerations. While it doesn’t reference new or existing regulations, the EO requires DOE to operate the platform consistent with:
- classification rules,
- supply-chain security requirements,
- federal cybersecurity standards, and
- “uniform and stringent” data-access and data-management processes with strong vetting for external users.
A system that integrates large biological datasets, frontier-scale foundation models, and automated lab workflows could dramatically accelerate discovery. It’s important to keep in mind, however, that such capabilities can also intersect with longstanding dual-use concerns: areas where the same tools that advance beneficial research might also lower barriers to potential harms.
4. Why Governance Matters for the Genesis Mission
Biology offers a clear example of the kinds of oversight challenges that can arise as AI accelerates scientific research. AI and lab automation can lower barriers to research that manipulates or enhances dangerous pathogens, often referred to as “gain-of-function” research.
Importantly, the launch of the Genesis Mission comes while key federal biosafety revisions are still in progress. In May, the White House issued Executive Order 14292, “Improving the Safety and Security of Biological Research.” That EO called for strengthening oversight of certain high-consequence biological research, including gain-of-function research. It assigned several tasks to OSTP, including:
- Revise or replace the 2024 Framework for Nucleic Acid Synthesis Screening.
- Replace the withdrawn 2024 dual-use and enhanced-pathogen oversight policy.
- Develop a strategy to “govern, limit, and track” gain-of-function and dual-use biological research outside federally funded environments.
Since then, there has been partial progress towards these goals, including NIH and USDA funding bans on gain-of-function research. But several other updates called for in EO 14292 have not been finalized. The Genesis Mission creates both an opportunity and a need to advance this work. By accelerating AI-enabled scientific research, the Mission heightens the importance of clear, modernized biosafety and biosecurity guidance—and gives the Administration a natural venue to advance it.
As DOE begins integrating advanced computation, large biological datasets, and automated experimentation, it becomes even more valuable to clarify how federal guidance should apply to AI-augmented research. The Genesis Mission may ultimately help spur the release of updated oversight frameworks and encourage broader policy discussions—including potential legislation—on how to manage dual-use research in the era of integrated AI for science platforms.
These issues aren’t limited to biology either. The Genesis Mission EO names nuclear science, quantum computing, advanced materials, and other domains where AI-accelerated discovery creates both major opportunities and critical governance issues.
5. The Hard Policy Questions Ahead
At first, the Genesis Mission will likely be a largely DOE-run effort limited to federal researchers and a small group of partners. But if it grows along the ambitious lines the EO lays out, managing who can access it—and how—becomes far more challenging. Once integrated AI-driven systems can design, optimize, or automate significant parts of scientific research, regulation becomes both urgent and harder to enforce in a uniform way:
- Federal rules can clearly govern federally funded research.
- But what about private-sector scientists who combine federal AI models or datasets with independently conducted wet-lab work?
- And what about academic or corporate AI-scientific platforms—built outside federal systems—that also integrate scientific data, advanced models, and automated labs?
These private and academic systems may be entirely outside federal oversight, complicating attempts to build coherent guardrails.
If the Genesis Mission succeeds, it will generate substantial new scientific data that will help train more capable models and enable new research pathways. At the same time, access to more powerful models and broader datasets will increase the importance of developing effective policies for data governance, user access, and managing research across the government and with the private sector.
6. Bottom Line
The Genesis Mission sets an ambitious vision for a unified AI-enabled science platform within the federal government. Its success will depend on future funding, interagency participation, and sustained follow-through. But even at this early planning stage, the EO brings core policy issues to the surface: oversight, data governance, access rules, and how to manage research that cuts across agencies and private sector entities.
As DOE and OSTP begin work on the Genesis Mission, the effort also creates a timely opportunity for the federal government to update dual-use oversight frameworks, such as the biosafety updates called for by EO 14292, and to build the governance structures needed for AI-accelerated science.
Legal Issues Raised by the Proposed Executive Order on AI Preemption
On November 19, 2025, a draft executive order that the Trump administration may issue as early as Friday, November 21, was publicly leaked. The six-page order consists of nine sections, including prefatory purpose and policy statements, a section containing miscellaneous “general provisions,” and six substantive provisions. This commentary provides a brief overview of some of the most important legal issues raised by the draft executive order (DEO). It is not intended to be comprehensive, and LawAI may publish additional commentaries and/or updates as events progress and additional legal issues come to light.
As an initial matter, it’s important to understand what an executive order is and what legal effect executive orders have in the United States. An executive order is not a congressionally enacted statute or “law.” While Congress undoubtedly has the authority to preempt some state AI laws by passing legislation, the President generally cannot unilaterally preempt state laws by presidential fiat (nor does the DEO purport to do so). An executive order can publicly announce the policy goals of the executive branch of the federal government, and can also contain directives from the President to executive branch officials and agencies.
Issue 1: The Litigation Task Force
The DEO’s first substantive section, § 3, would instruct the U.S. Attorney General to “establish an AI Litigation Task Force” charged with bringing lawsuits in federal court to challenge allegedly unlawful state AI laws. The DEO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that the Department of Justice (DOJ) can identify.
Dormant commerce clause arguments
Presumably, the DEO’s reference to the commerce clause refers to the dormant commerce clause argument laid out by Andreessen Horowitz in September 2025. This argument, which a number of commentators have raised in recent months, suggests that certain state AI laws violate the commerce clause of the U.S. Constitution because they impose excessive burdens on interstate commerce. LawAI’s analysis indicates that this commerce clause argument, at least with respect to the state laws specifically referred to in the DEO, is legally meritless and unlikely to succeed in court. We intend to publish a more thorough analysis of this issue in the coming weeks in addition to the overview included here.
In 2023, the Supreme Court issued an important dormant commerce clause opinion in the case of National Pork Producers Council v. Ross. The thrust of the majority opinion in that case, authored by Justice Gorsuch, is that state laws generally do not violate the dormant commerce clause unless they involve purposeful discrimination against out-of-state economic interests in order to favor in-state economic interests.
Even proponents of this dormant commerce clause argument typically acknowledge that the state AI laws they are concerned with generally do not discriminate against out-of-state economic interests. Therefore, they often ignore Ross, or cite the dissenting opinions while ignoring the majority. Their preferred precedent is Pike v. Bruce Church, Inc., a 1970 case in which the Supreme Court held that a state law with “only incidental” effects on interstate commerce does not violate the dormant commerce clause unless “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” This standard opens the door for potential challenges to nondiscriminatory laws that arguably impose a “clearly excessive” burden on interstate commerce.
The state regulation that was invalidated in Pike would have required cantaloupes grown in Arizona to be packed and processed in Arizona as well. The only state interest at stake was the “protect[ion] and enhance[ment] of [cantaloupe] growers within the state.” The Court in Pike specifically acknowledged that “[w]e are not, then, dealing here with state legislation in the field of safety where the propriety of local regulation has long been recognized.”
Even under Pike, then, it’s hard to come up with a plausible argument for invalidating the state AI laws that preemption advocates are concerned with. Andreessen Horowitz’s argument is that the state proposals in question, such as New York’s RAISE Act, “purport to have significant safety benefits for their residents,” but in fact “are unlikely” to provide substantial safety benefits. But this is, transparently, a policy judgment, and one with which the state legislature of New York evidently disagrees. As Justice Gorsuch observes in Ross, “policy choices like these usually belong to the people and their elected representatives. They are entitled to weigh the relevant ‘political and economic’ costs and benefits for themselves, and ‘try novel social and economic experiments’ if they wish.” New York voters overwhelmingly support the RAISE Act, as did an overwhelming majority of New York’s state legislature when the bill was put to a vote. In my opinion, it is unlikely that any federal court will presume to override those policy judgments and substitute its own.
That said, it is possible to imagine a state AI law that would violate the dormant commerce clause. For example, a law that placed burdensome requirements on out-of-state developers while exempting in-state developers, in order to grant an advantage to in-state AI companies, would likely be unconstitutional. Since I haven’t reviewed every state AI bill that has been or will be proposed, I can’t say for sure that none of them would violate the dormant commerce clause. It is entirely possible that the Task Force will succeed in invalidating one or more state laws via a dormant commerce clause challenge. It does seem relatively safe, however, to predict that the specific laws referred to in the executive order and the state frontier AI safety laws most commonly referenced in discussions of preemption would likely survive any dormant commerce clause challenges brought against them.
State laws preempted by existing federal regulations
Section 3 of the DEO also specifically indicates that the AI Litigation Task Force will challenge state laws that “are preempted by existing Federal regulations.” It is possible for state laws to be preempted by federal regulations, and, as with the commerce clause issue discussed above, it’s possible that the Task Force will eventually succeed in invalidating some state laws by arguing that they are so preempted.
In the absence of significant new federal AI regulation, however, it is doubtful whether many of the state laws the DEO is intended to target will be vulnerable to this kind of legal challenge. Moreover, any state AI law that created significant compliance costs for companies and was plausibly preempted by existing federal regulations could be challenged by the affected companies, without the need for DOJ intervention. The fact that (to the best of my knowledge) no such lawsuit has yet been filed challenging the most notable state AI laws indicates that the new Task Force will likely be faced with slim pickings, at least until new federal regulations are enacted and/or state regulation of AI intensifies.
It seems likely that § 3’s reference to preemption via existing federal regulation is at least partially intended to refer to Communications Act preemption as discussed in the AI Action Plan. There is a major obstacle to preempting state AI laws under the Communications Act, however: the Communications Act provides the FCC (and sometimes courts) with some authority to preempt certain state laws regulating “telecommunications services” and “information services,” but existing legal precedents clearly establish that AI systems are neither “telecommunications services” nor “information services” under the Communications Act. In his comprehensive policy paper on FCC preemption of state AI laws, Lawrence J. Spiwak (a staunch supporter of preemption) analyzes the relevant precedents and concludes that “given the plain language of the Communications Act as well as the present state of the caselaw, it is highly unlikely the FCC will succeed in [AI preemption] efforts” and that “trying to contort the Communications Act to preempt the growing patchwork of disparate state AI laws is a Quixotic exercise in futility.” Harold Feld of Public Knowledge essentially agrees with this assessment in his piece on the same topic.
Alternative grounds
Section 3 also authorizes the Task Force to challenge state AI laws that are “otherwise unlawful” in the Attorney General’s judgment. The Department of Justice employs a great number of smart and creative lawyers, so it’s impossible to say for sure what theories they might come up with to challenge state AI laws. That said, preemption of state AI laws has been a hot topic for months now, and the best theories that have been publicly floated for preemption by executive action are the dormant commerce clause and Communications Act theories discussed above. This is, it seems fair to say, a bearish indicator, and I would be somewhat surprised if the Task Force managed to come up with a slam dunk legal argument for broad-based preemption that has hitherto been overlooked by everyone who’s considered this issue.
Issue 2: Restrictions on State Funding
Section 5 of the DEO contains two subsections that concern efforts to withhold federal grant funding from states that attempt to regulate AI. Subsection (a) indicates that Commerce will attempt to withhold non-deployment Broadband Equity, Access, and Deployment (BEAD) funding “to the maximum extent allowed by federal law” from states with AI laws listed pursuant to § 4 of the DEO, which instructs the Department of Commerce to identify state AI laws that conflict with the policy directives laid out in § 1 of the DEO. Subsection (b) instructs all federal agencies to assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or identified as undesirable pursuant to § 4.
In my view, § 5 of the DEO is the provision with the most potential to affect state AI legislation. While § 5 does not contain any attempt to actually preempt state AI laws, the threat of losing federal grant funds could have the practical effect of incentivizing some states to abandon their AI-related legislative efforts. And, as Daniel Cochrane and Jack Fitzhenry pointed out during the reconciliation moratorium fight, “Smaller conservative states with limited budgets and large rural populations need [BEAD] funds. But wealthy progressive states like California and New York can afford to take a pass and just keep enforcing their tech laws.” While politicians in deep blue states will be politically incentivized to fight the Trump administration’s attempts to preempt overwhelmingly popular AI laws even if it means losing access to some federal funds, politicians in red states may instead be incentivized to avoid conflict with the administration.
Section 5(a): Non-deployment BEAD funding
Section 5(a) of the DEO is easier to analyze than § 5(b), because it clearly identifies the funds that are in jeopardy—non-deployment BEAD funding. The BEAD program is a $42.45 billion federal grant program established by Congress in 2021 for the purpose of facilitating access to reliable, high-speed broadband internet for communities throughout the U.S. A portion of the $42.45 billion total was allocated to each of 56 states and territories in June 2023 by the National Telecommunications and Information Administration (NTIA). In June 2025, the NTIA announced a restructuring of the BEAD program that eliminated many Biden-era requirements and rescinded NTIA approval for all “non-deployment” BEAD funding, i.e., BEAD funding that states intended to spend on uses other than actually building broadband infrastructure. The total amount of BEAD funding that will ultimately be classified as “non-deployment” is estimated to be more than $21 billion.
BEAD funding was previously used as a carrot and stick for AI preemption in June 2025, as part of the effort to insert a moratorium or “temporary pause” on state AI regulation into the most recent reconciliation bill. There are two critical differences between the attempted use of BEAD funding in the reconciliation process and its use in the DEO, however. First, the DEO is, obviously, an executive order rather than a legislative enactment. This matters because agency actions that would be perfectly legitimate if authorized by statute can be illegal if undertaken without statutory authorization. Second, while the final drafts of the reconciliation moratorium would only have jeopardized BEAD funding belonging to states that chose to accept a portion of $500 million in additional BEAD funding that the reconciliation bill would have appropriated, the DEO would jeopardize non-deployment BEAD funding belonging to any state that attempts to regulate AI in a manner deemed undesirable under the DEO.
The multibillion-dollar question here is: can the administration legally withhold BEAD funding from states because those states enact or enforce laws regulating AI? I am going to cop out and say, honestly, that I don’t know for certain at this point in time. There are a number of potential legal issues with the course of action that the DEO contemplates, but as of November 20, 2025 (one day after the DEO first leaked) no one has published a definitive analysis of whether the administration will be able to overcome these obstacles.
The Trump administration’s Department of Transportation (DOT) recently attempted a maneuver similar to the one contemplated in the DEO when, in response to an executive order directing agencies to “undertake any lawful actions to ensure that so-called ‘sanctuary’ jurisdictions… do not receive access to federal funds,” the DOT attempted to add conditions to all DOT grant agreements requiring grant recipients to cooperate in the enforcement of federal immigration law. Affected states promptly sued to challenge the addition of this grant condition and successfully secured a preliminary injunction prohibiting DOT from implementing or enforcing the conditions. In early November 2025, the federal District Court for the District of Rhode Island ruled that the challenged conditions were unlawful for three separate reasons: (1) imposing the conditions exceeded the DOT’s statutory authority under the laws establishing the relevant grant programs; (2) imposing the conditions was “arbitrary and capricious,” in violation of the Administrative Procedure Act; and (3) imposing the conditions violated the Spending Clause of the U.S. Constitution. It remains to be seen whether the district court’s ruling will be upheld by a federal appellate court and/or by the U.S. Supreme Court.
Suppose that, in the future, the Department of Commerce decides to withhold non-deployment BEAD funding from states with AI laws deemed undesirable under the DEO. States could challenge this decision in court and ask the court to order NTIA to release the previously allocated non-deployment funds to the states, arguing that the withholding of funds exceeded NTIA’s authority under the statute authorizing BEAD, violated the APA, and violated the Spending Clause. Each of these arguments seems at least somewhat plausible, on an initial analysis. Nothing in the statute authorizing BEAD appears to give the federal government unlimited discretion to withhold BEAD funds to vindicate policy goals that have little or nothing to do with access to broadband; rescinding previously awarded grant funds and then withholding them in order to further goals not contemplated by Congress is at least arguably arbitrary and capricious; and the course of action proposed in the DEO is, arguably, impermissibly coercive in violation of the Spending Clause.
AI regulation is a less politically divisive issue than immigration enforcement, and a cynical observer might assume that this would give states in this hypothetical AI case a better chance on appeal than the states in the DOT immigration conditions case discussed above. However, there are a number of differences between the DOT conditions case and the course of action contemplated in the DEO that could make it harder—or easier—for states to prevail in court. Accurately estimating states’ chances of success with high confidence will take more than one day’s worth of analysis.
It should also be noted that, regardless of whether or not states could eventually prevail in a hypothetical lawsuit, the prospect of having BEAD funding denied or delayed, perhaps for years, could be enough to discourage some states from enacting AI legislation of a type disfavored by the Department of Commerce under the DEO.
Section 5(b): Other discretionary agency funding
In addition to withholding non-deployment BEAD funding, the DEO would instruct agencies throughout the executive branch to “take immediate steps to assess their discretionary grant programs and determine whether agencies may condition such grants on States either not enacting an AI law that conflicts with the policy of this order… or, for those States that have enacted such laws, on those States entering into a binding agreement with the relevant agency not to enforce any such laws during any year in which it receives the discretionary funding.”
The legality of this contemplated course of action, and its likelihood of being upheld in court, is even more difficult to conclusively determine ex ante than the legality and prospects of the BEAD withholding discussed above. The federal government distributes about a trillion dollars a year in grants to state and local governments, and more than a quarter of that money is in the form of discretionary grants (as opposed to grants from mandatory programs such as Medicaid). That’s a lot of money, and it’s broken up into a lot of different discretionary grants. It’s likely that many of the arguments against withholding grant money from AI-regulating states would be the same from one grant to another. However, it is also likely that there are some discretionary grants to states which could more reasonably be conditioned on compliance with the President’s deregulatory AI policy directives and other grants for which such conditioning would be less reasonable. Ultimately, further research into this issue is needed to determine how much state grant funding, if any, is legitimately at risk.
Issue 3: Federal Reporting and Disclosure
Section 6 of the DEO instructs the FCC, in consultation with AI czar David Sacks, to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” Presumably, “conflicting state laws” is intended to refer to state AI transparency laws such as California’s SB 53 and New York’s RAISE Act. It’s not clear from the language of the DEO what legal authority this “Federal reporting and disclosure standard” would be promulgated under. Under the Biden administration, the Department of Commerce’s Bureau of Industry and Security (BIS) attempted to impose reporting requirements on frontier model developers under the information-gathering authority provided by § 705 of the Defense Production Act—but § 705 has historically been used by BIS rather than the FCC, and I am not aware of any comparable authority that would authorize the FCC to implement a mandatory “federal reporting and disclosure standard” for AI models.
Generally, regulatory preemption can only occur when Congress has granted an executive-branch agency authority to promulgate regulations and preempt state laws inconsistent with those regulations. This authority can be granted expressly or by implication, but, as discussed above in the discussion of Communications Act preemption under § 3 of the DEO, the FCC has never before asserted that it possesses any significant regulatory authority (express or otherwise) over any aspect of AI development. It’s possible that the FCC is relying on a creative interpretation of its authority under the Communications Act—FCC Chairman Brendan Carr previously indicated that the FCC was “taking a look” at whether the Communications Act grants the FCC authority to regulate AI and preempt onerous state laws. However, as discussed above, legal commentators almost universally agree that “[n]othing in the Communications Act confers FCC authority to regulate AI.”
It’s possible that the language of the DEO is simply meant to indicate that the FCC and Sacks will suggest a standard that may then be enacted into law by Congress. This would certainly overcome the legal obstacles discussed above, and could (depending on the language of the statute) allow for preemption of state AI transparency laws. However, it would require passing new federal legislation, which is easier said than done.
Issue 4: Preemption of state laws for “deceptive practices” under the FTC Act
Section 7 of the DEO directs the Federal Trade Commission (FTC) to issue a policy statement arguing that certain state AI laws are preempted by the FTC Act’s prohibition on deceptive commercial practices. Presumably, the laws which the DEO intends for this guidance to target include Colorado’s AI Act, which the DEO’s Purpose section accuses of “forc[ing] AI models to embed DEI in their programming, and to produce false results in order to avoid a ‘differential treatment or impact’…” on enumerated demographic groups, and other similar “algorithmic discrimination” laws. A policy statement on its own generally cannot preempt state laws, but it seems likely that the policy statement that the DEO instructs the FTC to create would be relied upon in subsequent preemption-related regulatory efforts and/or by litigants seeking to prevent enforcement of the allegedly preempted laws in court.
While the Trump administration has previously expressed disapproval of “woke” AI development practices, for example in the recent executive order on “Preventing Woke AI in the Federal Government,” this argument that the FTC Act’s prohibition on UDAP (unfair or deceptive acts or practices in or affecting commerce) preempts state algorithmic discrimination laws is, as far as I am aware, new. During the Biden administration, Lina Khan’s FTC published guidance containing an arguably similar assertion: that the “sale or use of—for example—racially biased algorithms” would be an unfair or deceptive practice under the FTC Act. Khan’s FTC did not, however, attempt to use this aggressive interpretation of the FTC Act as a basis for FTC preemption of any state laws.
Colorado’s AI statute has been widely criticized, including by Governor Jared Polis (who signed the act into law) and other prominent Colorado politicians. In fact, the law has proven so problematic for Colorado that Governor Polis, a Democrat, was willing to cross party lines in order to support broad-based preemption of state AI laws for the sake of getting rid of Colorado’s. Therefore, an attempt by the Trump administration to preempt Colorado’s law (or portions thereof) might meet with relatively little opposition from within Colorado. It’s not clear who, if anyone, would have standing to challenge FTC preemption of Colorado’s law if Colorado’s attorney general refused to do so. But Colorado is not the only state with a law prohibiting algorithmic discrimination, and presumably the guidance the DEO instructs the FTC to produce would inform attempts to preempt other “woke” state AI laws as well as Colorado’s.
The question of how those attempts would fare in federal court is an interesting one, and I look forward to reading analysis of the issue from commentators with expertise regarding the FTC Act and algorithmic discrimination laws. Unfortunately, I am not such a commentator and will therefore plead ignorance on this point.
Ten Highlights of the White House’s AI Action Plan
Today, the White House released its AI Action Plan, laying out the administration’s priorities for AI innovation, infrastructure, and adoption. Ultimately, the value of the Plan will depend on how it is operationalized via executive orders and the actions of executive branch agencies, but the Plan itself contains a number of promising policy recommendations. We’re particularly excited about:
- The section on federal government evaluations of national security risks in frontier models. This section correctly identifies the possibility that “the most powerful AI systems may pose novel national security risks in the near future,” potentially including risks from cyberattacks and risks related to the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. Ensuring that the federal government has the personnel, expertise, and authorities needed to guard against these risks should be a bipartisan priority.
- The discussion of interpretability and control, which recognizes the importance of interpretability to the use of advanced AI systems in national security and defense applications. The Plan also recommends three policy actions for advancing the science of interpretability, each of which seems useful for frontier AI security in expectation.
- The overall focus on standard-setting by the Center for AI Standards and Innovation (CAISI, formerly known as the AI Safety Institute) and other government agencies, in partnership with industry, academia, and civil society organizations.
- The recommendation on building an AI evaluations ecosystem. The science of evaluating AI systems’ capabilities is still in its infancy, but the Plan identifies a few promising ways for CAISI and other government agencies to support the development of this critical field.
- The emphasis on physical and cybersecurity for frontier labs and bolstering critical infrastructure cybersecurity. As Leopold Aschenbrenner pointed out in “Situational Awareness,” AI labs are not currently equipped to protect their model weights and algorithmic secrets from being stolen by China or other geopolitical rivals of the U.S., and fixing this problem is a crucial national security imperative.
- The call to improve the government’s capacity for AI incident response. Advanced planning and capacity-building are crucial for ensuring that the government is prepared to respond in the event of an AI emergency. Incident response preparation is an effective way to increase resiliency without directly burdening innovation.
- The section on how the legal system should handle deceptive AI-generated “evidence.” Legal rules often lag behind technological development, and the guidance contemplated here could be highly useful to courts that might otherwise be unprepared to handle an influx of unprecedentedly convincing fake evidence.
- The recommendations for ramping up export control enforcement and plugging loopholes in existing semiconductor export controls. Compute governance—preventing geopolitical rivals from gaining access to the chips needed to train cutting-edge frontier AI models—continues to be an effective policy tool for maintaining the U.S.’s lead in the race to develop advanced AI systems before China.
- The suggested regulatory sandboxes, which could enable AI adoption and increase the AI governance capacity of sectoral regulatory agencies like the FDA and the SEC.
- The section on deregulation wisely rejects the maximalist position of the moratorium that was stripped from the recent reconciliation bill by a 99-1 Senate vote. Instead of proposing overbroad and premature preemption of virtually all state AI regulations, the Plan recommends that AI-related federal funding should not “be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
  - At the moment, it’s hard to identify any significant source of “AI-related federal funding” to states, although this could change in the future. This being the case, it will likely be difficult for the federal government to offer states any significant inducement towards deregulation unless it first offers them new federal money. And disincentivizing truly “burdensome” state regulations that would interfere with the effectiveness of federal grants seems like a sensible alternative to broader forms of preemption.
  - The Plan also seems to suggest that the FCC could preempt some state AI regulations under § 253 of the Communications Act of 1934. It remains to be seen whether and to what extent this kind of preemption is legally possible. At first glance, however, it seems unlikely that the FCC’s authority to regulate telecommunications services could legally be used for any especially broad preemption of state AI laws. Any broad FCC preemption under this authority would likely have to go through notice and comment procedures and might struggle to overcome legal challenges from affected states.
Two Byrd Rule problems with the AI moratorium
Note: this commentary was drafted on June 26, 2025, as a memo not intended for publication; we’ve elected to publish it in case the analysis laid out here is useful to policymakers or commentators following ongoing legislative developments regarding the proposed federal moratorium on state AI regulation. The issues noted here are relevant to the latest version of the bill as of 2:50 p.m. ET on June 30, 2025.
Two Byrd Rule issues have emerged, both of which should be fixed. It appears that the Parliamentarian has not ruled on either.
Effects on existing BEAD funding
The Parliamentarian may have already identified the first Byrd Rule issue: the plain text of the AI Moratorium would affect all $42.45 billion in BEAD funding, not just the newly allocated $500 million. It is not 100% certain that a court would read the statute this way, but it is the most likely outcome. We analyzed this problem in a recently published commentary. This issue could be fixed via an amendment.
Private enforcement of the moratorium
In that same article, we flagged a second problem that also raises a Byrd Rule issue: the AI Moratorium seemingly creates enforcement rights in private parties. That’s a problem under the Byrd Rule, because the AI Moratorium must be a “necessary term or condition” of an outlay, and a private enforcement right held by third parties cannot be characterized as a necessary term or condition of an outlay that does not concern those parties. This can be fixed by clarifying that the only enforcement mechanism is withdrawal or denial of the new BEAD funding.
The text at issue – private enforcement of the moratorium
The plain text of the moratorium, and applicable legal precedents, likely empower private parties to enforce the moratorium in court. Stripping the provision down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That sounds like a prohibition. It doesn’t mention the Department of Commerce. Nor does it leave it to the Secretary’s discretion whether that prohibition applies. If states satisfy the criteria, they likely are prohibited from enforcing AI laws.
Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws—e.g., the Airline Deregulation Act, 49 U.S.C. § 41713—have used similar language to expressly preempt state laws, courts have interpreted this as authorizing private parties to sue for an injunction preventing enforcement of preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).
What would happen – private lawsuits to enforce the moratorium
Private parties could vindicate this right in one of two ways. First, if a private party (e.g. an AI company) fears that a state will imminently sue it for violating that state’s AI law, the private party could seek a declaratory judgment in federal court. Second, if the state actually sues the private party, that party could raise the moratorium as a defense to that lawsuit. If the private party is based in the same state, that defense would be heard in state court, and could result in dismissal of the state’s claims; if the party is from out-of-state, the claim could be removed to federal court, where a judge could also throw out the state’s claims.
Why it’s a Byrd Rule problem – private rights are not “terms or conditions”
The AI Moratorium must be a “necessary term or condition” of an outlay. In this case, promising not to enforce AI laws is a valid “term or condition” of the grant. Passively opening oneself up to lawsuits and defenses by private parties is not. Those lawsuits occur long after states take the money, are outside their control, and involve the actions of individuals who are not parties to the grant agreement. They also have significant effects unrelated to spending: binding the actions of states and invalidating laws in ways completely separate from the underlying transaction between the Department of Commerce and the states. It is perfectly compatible with the definition of “terms and conditions” for the Department of Commerce to deobligate funds if the terms of its grant are violated. It is an entirely different thing to create a defense or cause of action for third parties and to allow those parties to interfere with the enforcement power of states. The creation of rights for a third party uninvolved in the delivery or receipt of an outlay cannot be considered a necessary term or condition.
The AI moratorium—the Blackburn amendment and new requirements for “generally applicable” laws
Published: 9:55 pm ET on June 29, 2025
Last updated: 10:28 pm ET on June 29, 2025
The latest version of the AI moratorium has been released, with some changes to the “rule of construction.” We’ve published two prior commentaries on the moratorium (both of which are still relevant, because the updated text has not addressed the issues noted in either). The new text:
- Shortens the “temporary pause” from 10 to 5 years;
- Attempts to exempt laws addressing CSAM, children’s online safety, and rights to name/likeness/voice/image—although the amendment seemingly fails to protect the laws its drafters intend to exempt; and
- Creates a new requirement that laws not impose an “undue or disproportionate burden,” which is likely to generate significant litigation.
The amendment tries to protect state laws on child sexual abuse materials and recording artists, but likely fails to do so.
The latest text appears to be drafted specifically to address the concerns of Senator Marsha Blackburn, who does not want the moratorium to apply to state laws affecting recording artists (like Tennessee’s ELVIS Act) and laws affecting child sexual abuse material (CSAM). But while the amended text lists each of these categories of laws as specific examples of “generally applicable” laws or regulations, the new text only exempts those laws if they do not impose an “undue or disproportionate burden” on AI models, systems, or “automated decision systems,” as defined in the moratorium, in order to “reasonably effectuate the broader underlying purposes of the law or regulation.”
However, laws like the ELVIS Act likely impose a disproportionate burden on AI systems. They almost exclusively target AI systems and outputs, and the effect of the law will almost entirely be borne by AI companies. While trailing qualifiers always vex courts, the fact that “undue or disproportionate burden” is separated from the preceding list by a comma strongly suggests that it qualifies the entire list and not just “common law.” Common sense also counsels in favor of this reading: it’s unlikely that an inherently general body of law (like common law) would place a disproportionate burden on AI, while legislation like the ELVIS Act absolutely could (and likely does). As we read the new text, the most likely outcome is that the laws Senator Blackburn wants to protect would not be protected.
Even if other readings are possible, this “disproportionate” language would almost certainly create litigation if enacted, with companies challenging whether the ELVIS Act and CSAM laws are actually exempted. As we have previously noted, the moratorium will likely be privately enforceable—meaning that any company or individual against whom a state attempts to enforce a state law or regulation will be able to sue to prevent enforcement.
The newly added “undue or disproportionate burden” language creates an unclear standard (and will likely generate extensive litigation)
The problem discussed above extends beyond the specific laws that Senator Blackburn wishes to protect. Previously, “generally applicable” laws were exempted. Under the new language, laws that address AI models/systems or “automated decision systems” can be exempted, but only if they do not place an “undue or disproportionate burden” on said models/systems. The effect of the new “undue or disproportionate burden” language will likely be to generate additional litigation and uncertainty. It may also make it more likely that some generally applicable laws, such as facial recognition laws or data protection laws, will no longer be exempt because they may place a disproportionate burden on AI models/systems.
Other less significant changes
Previously, subsection (q)(2)(A)(ii) excepted any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of [AI models/systems/automated decision systems].” As amended, the relevant provision now excepts any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures related to the adoption or deployment of [AI models/systems/automated decision systems].” This amended language is slightly broader than the original, but the difference does not seem highly significant.
Additionally, the structure of the paragraphs has been adjusted slightly, likely to make clear that subparagraph (B) (which requires that any fee or bond imposed by any excepted law be reasonable and cost-based and treat AI models/systems in the same manner as other models/systems) modifies both the “generally applicable law” and “primary purpose and effect” prongs of the rule of construction rather than just one or the other.
Other issues remain
As we’ve discussed previously, our best read of the text suggests that two additional issues remain unaddressed:
- Any state that takes any of the newly appropriated $500 million in BEAD funding runs the risk of having its entire share of the previously obligated $42.45 billion in existing BEAD funding clawed back for violations of the moratorium.
- Private companies and individuals will likely be able to enforce the moratorium through litigation.
The AI moratorium—more deobligation issues
Earlier this week, LawAI published a brief commentary discussing how to interpret the provisions in the proposed federal moratorium on state laws regulating AI relating to deobligation of Broadband Equity, Access, and Deployment (BEAD) funds. Since that publication, the text of the proposed moratorium has been updated, apparently in order to comply with a request from the Senate parliamentarian. Given the importance of this issue, and the existence of some amount of confusion around what exactly the changes to the moratorium’s text do, we’ve decided to publish a sequel to that earlier commentary briefly explaining how this new version of the bill will impact existing BEAD funding.
Does the latest version of the moratorium affect existing BEAD funding or only the new $500 million?
The moratorium would still, potentially, affect both existing and newly appropriated BEAD funding.
Essentially, there are two tranches of money at issue here: $500 million in new BEAD funding that the reconciliation bill would appropriate, and the $42.45 billion in existing BEAD funding that has already been obligated to states (but none of which has actually been spent as of this writing). The previous version of the moratorium, as we noted in our earlier commentary, contained a deobligation provision that would have allowed deobligation (i.e., clawing back) of a state’s entire portion of the $42.45 billion tranche as well as the same state’s portion of the new $500 million tranche.
The new version of the moratorium would update that deobligation provision by adding the words “if obligated any funds made available under subsection (b)(5)(A)” to the beginning of 47 U.S.C. § 1702(g)(3)(B)(iii). The provision now reads, in relevant part, “The Assistant Secretary… may, in addition to other authority under applicable law, deobligate grant funds awarded to an eligible entity that… if obligated any funds made available under subsection (b)(5)(A), is not in compliance with [the AI moratorium].”
In other words, the update clarifies that only states that accept a portion of the new $500 million in BEAD funding can have their BEAD funding clawed back if they attempt to enforce state laws regulating AI. But it does not change the fact that any state that does accept a portion of the $500 million, and then violates the moratorium (intentionally or otherwise), is subject to having all of its BEAD funding clawed back, including its portion of the $42.45 billion tranche of existing BEAD funding. Paragraph (3) covers “deobligation of awards” generally, and the phrase “grant funds awarded to an eligible entity” clearly means all grant funds awarded to that entity, rather than just funds made available under subsection (b)(5)(A) (i.e., the new $500 million). This is clear because subsections (g)(3)(B)(i) and (g)(3)(B)(ii), which allow deobligation if a state, for example, “demonstrates an insufficient level of performance, or wasteful or fraudulent spending,” clearly allow for deobligation of all of a state’s BEAD funding rather than just the new $500 million tranche.
So what has changed?
The most significant consequence of the update to the deobligation provision is that any state that does not accept any of the new $500 million appropriation is now clearly not subject to having existing BEAD funds clawed back for noncompliance with the moratorium. As we noted in our previous commentary, the earlier text would have required compliance with the moratorium if Commerce deobligated existing BEAD funds for, say, “wasteful or fraudulent spending” and then re-obligated them. That would not be possible under the new text.
In other words, states would clearly be able to opt out of compliance with the moratorium by choosing not to accept their share of the newly appropriated BEAD money. As other authors have noted, this would mean that wealthy states with a strong appetite for AI regulation, like New York and California, could pass on the new funding and continue to enact and enforce AI laws while less wealthy and more rural states might accept the additional BEAD funding in exchange for ceasing to regulate. And if technological progress and the emergence of new risks from AI caused any states that originally accepted their share of the $500 million to later change course and begin to regulate, they could potentially have all of their previously obligated BEAD funding clawed back.
The AI Moratorium—deobligation issues, BEAD funding, and independent enforcement
There’s been a great deal of discussion in recent weeks about the controversial proposed federal moratorium on state laws regulating AI. The most recent development is that the moratorium has been amended to form a part of the Broadband Equity, Access, and Deployment (BEAD) program. The latest draft of the moratorium, which recently received the go-ahead from the Senate Parliamentarian, appropriates an additional $500 million in BEAD funding, to be obligated to states that comply with the moratorium’s requirement not to enforce laws regulating AI models, systems, or “automated decision systems.” This commentary discusses two pressing legal questions that have been raised about the new moratorium language—whether it affects the previously obligated $42.45 billion in BEAD funding in addition to the $500 million in new funding, and whether private parties will be able to sue to enforce the moratorium.
Does the moratorium affect existing BEAD funding, or only the new $500 million?
One issue that has caused some confusion among commentators and policymakers is precisely how compliance or noncompliance with the moratorium will affect states’ ability to keep and spend the $42.45 billion in BEAD funding that has previously been obligated.
It is true that subsection (p) specifies that only amounts made available “On and after the date of enactment of this subsection” (in other words, the new $500 million appropriation and any future appropriations) depend on compliance with the moratorium. However, the moratorium would also add a new provision to subsection (g), which covers “deobligation of awards.” This new provision states that Commerce may deobligate (i.e., withdraw) “grant funds awarded to an eligible entity that… is not in compliance with subsection (q) or (r).” This deobligation provision clearly and unambiguously applies to all $42.45 billion in previously obligated BEAD funding, in addition to the new $500 million. The amendment to subsection (g) modifies the existing BEAD deobligation rules, not just the rules governing the new appropriation. And while subsections (p) and (q) affect only states that accept new obligations on or after the enactment of the bill, subsection (g) applies to all “grant funds,” with no limitation on the funds’ source or timing.
So, any state that is not in compliance with subsection (q)—which includes any state that accepts any portion of the newly appropriated $500 million and is later determined to have violated the moratorium, even unintentionally—could face having all of its previously obligated BEAD funding clawed back by Commerce, rather than just its portion of the new $500 million appropriation.
Additionally, it is possible that even states that choose not to accept any of the new $500 million could be affected, if Commerce deobligates previously obligated funds for reasons such as “an insufficient level of performance, or wasteful or fraudulent spending.” If this occurred, then any re-obligation of the clawed-back funds would require compliance with the moratorium. In other words, Commerce could attempt to use a state’s entire portion of the $42.45 billion BEAD funding amount as a cudgel to coerce states into complying with the moratorium and agreeing not to regulate AI models, systems, or “automated decision systems.”
Can private parties enforce the moratorium?
Probably. Various commentators have argued that the moratorium cannot be enforced by private parties, or that the Secretary of Commerce will, in his discretion, determine how vigorously the moratorium will be enforced. But the plain text of the provision, and applicable legal precedents, indicate that private parties will likely be entitled to enforce the prohibition on state AI regulation as well.
Stripping the provision down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That is a clear prohibition. It doesn’t mention the Department of Commerce. Nor does it leave it to the Secretary’s discretion whether that prohibition applies. If states satisfy the criteria, they are prohibited from enforcing laws restricting AI.
Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws, such as the Airline Deregulation Act, 49 U.S.C. § 41713, have used similar language to expressly preempt state laws, courts have interpreted that language as authorizing private parties to sue for an injunction preventing enforcement of the preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).
LawAI’s comments on the Draft Report of the Joint California Policy Working Group on AI Frontier Models
At Governor Gavin Newsom’s request, a joint working group released a draft report on March 18, 2025, setting out a framework for frontier AI policy in California. Several members of the staff at the Institute for Law & AI submitted comments on the draft report as it relates to their existing research. Read their comments below:
These comments were submitted to the Working Group as feedback on April 8, 2025. The opinions expressed in these comments are those of the authors and do not reflect the views of the Institute for Law & AI.
Liability and Insurance Comments
by Gabriel Weil and Mackenzie Arnold
Key Takeaways
- Insurance is a complement to, not a replacement for, clear tort liability.
- Correctly scoped, liability is compatible with innovation and well-suited to conditions of uncertainty.
- Safe harbors that limit background tort liability are a risky bet when we are uncertain about the magnitude of AI risks and have yet to identify robust mitigations.
Whistleblower Protections Comments
by Charlie Bullock and Mackenzie Arnold
Key Takeaways
- Whistleblowers should be protected for disclosing information about risks to public safety, even if no law, regulation, or company policy is violated.
- California’s existing whistleblower law already protects disclosures about companies that break the law; subsequent legislation should focus on other improvements.
- Establishing a clear reporting process or hotline will enhance the effectiveness of whistleblower protections and ensure that reports are put to good use.
Scoping and Definitions Comments
by Mackenzie Arnold and Sarah Bernardo
Key Takeaways
- Ensuring that a capable entity regularly updates what models are covered by a policy is a critical design consideration that future-proofs policies.
- Promising techniques to support updating include legislative purpose clauses, periodic reviews, designating a capable updater, and providing that updater with the information and expertise needed to do the job.
- Compute thresholds are an effective tool to right-size AI policy, but they should be paired with other tools like carve-outs, tiered requirements, multiple definitions, and exemptions to be most effective.
- Compute thresholds are an excellent initial filter to determine what models are in scope, and capabilities evaluations are a particularly promising complement.
- In choosing a definition of covered models, policymakers should consider how well the definitional elements are risk-tracking, resilient to circumvention, clear, and flexible—in addition to other factors discussed in the Report.
Draft Report of the Joint California Policy Working Group on AI Frontier Models—scoping and definitions comments
These comments on the Draft Report of the Joint California Policy Working Group on AI Frontier Models were submitted to the Working Group as feedback on April 8, 2025. The opinions expressed in these comments are those of the authors and do not reflect the views of the Institute for Law & AI.
Commendations
1. The Report correctly identifies that AI models and their risks vary significantly and thus merit different policies with different inclusion criteria.
Not all AI policies are made alike. Those that target algorithmic discrimination, for example, concern a meaningfully different subset of systems, actors, and tradeoffs than a policy that targets cybersecurity threats. What’s more, the market forces affecting these different policies vary considerably. For example, one might be far more concerned about limiting innovation in a policy context where many small startups are attempting to integrate AI into novel, high-liability-risk contexts (e.g., healthcare) and less concerned in contexts that involve a few large actors receiving large, stable investments, where the rate of tort litigation is much lower absent grievous harms (e.g., frontier model development). That’s all to say: It makes sense to foreground the need to scope AI policies according to the unique issue at hand.
2. We agree that at least some policies should squarely address foundation models as a distinct category.
Foundation models, in particular those that present the most advanced or novel capabilities in critical domains, present unique challenges that merit separate treatment. These differences emerge from the unique characteristics of the models themselves, not their creators (who vary considerably) or their users. And the potential benefits and risks that foundation models present cut across clean sectoral categories.
3. We agree that thresholds are a useful and necessary tool for tailoring laws and regulations (even if they are imperfect).
Thresholds are easy targets for criticism. After all, there is something inherently arbitrary about setting a speed limit at 65 miles per hour rather than 66. Characteristics are more often continuous than binary, so typically there isn’t a clear category shift after you cross over some talismanic number. But this issue isn’t unique to AI policy, and in every other context, government goes on nonetheless. As the Report notes, policy should be proportional in its effects and appropriately narrow in its application. Thresholds help make that possible.
4. The Report correctly acknowledges the need to update thresholds and definitional criteria over time.
We agree that specific threshold values and related definitional criteria will likely need to be updated to keep up with technological advances. Discrete, quantitative thresholds are particularly at risk of becoming obsolete. For instance, thresholds based on training compute may become obsolete due to a variety of AI developments, including improvements in compute and algorithmic efficiency, techniques such as distillation, and/or the growing impact of inference scaling. Given the competing truths that setting some threshold is necessary and that any threshold will inevitably become obsolete, ensuring that definitions can be quickly, regularly, and easily updated should be a core design consideration.
5. We agree that, at present, compute thresholds (combined with other metrics and/or thresholds) are preferable to developer-level thresholds.
Ultimately, the goal of a threshold is to set a clear, measurable, and verifiable bar that correlates with the risk or benefit the policy attempts to address. In this case, a compute threshold best satisfies those criteria—even if it is imperfect. For more discussion, see Training Compute Thresholds: Features and Functions in AI Regulation and The Role of Compute Thresholds for AI Governance.
Recommendations
1. The Report should further emphasize the centrality of updating thresholds and definitional criteria.
Updating is perhaps the most important element of an AI policy. Without it, the entire law may in short order cease to cover the conduct or systems policymakers aimed to target. We should expect this to happen by default. The error may be one of overinclusion—for example, large systems may present few or manageable risks even after a compute threshold is crossed. After some time, we will be confident that these systems do not merit special government attention and will want to remove obligations that attach to them. The error may be one of underinclusion—for example, improvements in compute or algorithmic efficiency, techniques such as distillation, and/or the growing impact of inference scaling may mean that models below the threshold merit inclusion. The error may be in both directions—a truly unfortunate, but entirely plausible, result. Either way, updating will be necessary for policy to remain effective.
We raise this point because without key champions, updating mechanisms will likely be left out of California AI legislation, leading to predictable policy failures. While updating has been incorporated into many laws and regulations, it was notably absent from the final draft of SB 1047 (save for an adjustment for inflation). A similar omission must be avoided in future bills if they are to remain effective over the long term. A clear statement by the authors of the Report would go a long way toward making updating feasible in future legislation.
Recommendation: The Report should clearly state that updating is necessary for effective AI policy and explain why policy is likely to become ineffective if updating is not included. It should further point to best practices (discussed below) to address common concerns about updating.
2. The Report should highlight key barriers to effective updating and tools to manage those barriers.
Three major barriers stand in the way of effective updating. First is the concern that updating may lead to large or unpredictable changes, creating uncertainty or surprise and making it more difficult for companies to engage in long-term planning or fulfill their compliance obligations. Second, some (understandably) worry that overly broad grants of discretion to agencies to update the scope of regulation will lead to future overreach, extending powers to contexts far beyond what was originally intended by legislators. Third, state agencies may lack sufficient capacity or knowledge to effectively update definitions.
The good news: These concerns can be addressed. Establishing predictable periodic reviews, requiring specific procedures for updates, and ensuring consistent timelines can limit uncertainty. Designating a competent updater and supplying them with the resources, data, and expert consultation they need can address concerns about agency competency. And constraining the option space of future updates can limit both surprise and the risk of overreach. When legislators are worried about agency overreach, their concern is typically that the law will be altered to extend to an unexpected context far beyond what the original drafters intended—for example, using a law focused on extreme risks to regulate mundane online chatbots or in a way that increases the number of regulated models by several orders of magnitude. To combat this worry, legislators can include a purpose clause that directly states the intended scope of the law and the boundaries of future updates. For example, a purpose clause could specify that future updates extend “only to those models that represent the most advanced models to date in at least one domain or materially and substantially increase the risk of harm X.” Purpose clauses can also come in the imperative or negative. For example, “in updating the definition in Section X, Regulator Y should aim to adjust the scope of coverage to exclude models that Regulator Y confidently believes pose little or no material risk to public health and safety.”
Recommendation: The Report should highlight the need to address the risks of uncertainty, agency overreach, and insufficient agency capacity when updating the scope of legislation. It should further highlight useful techniques to manage these issues, namely, (a) including purpose clauses or limitations in the relevant definitions, (b) specifying the data, criteria, and public input to be considered in updating definitions, (c) establishing periodic reviews with predictable frequencies, specific procedures, and consistent timelines, (d) designating a competent updater that has adequate access to expertise in making their determinations, (e) ensuring sufficient capacity to carry out periodic reviews and quickly make updates outside of such reviews when necessary, and (f) providing adequate notice and opportunity for input.
3. The Report should highlight other tools beyond thresholds to narrow the scope of regulations and laws—namely, carve-outs, tiered requirements, multiple definitions, and exemption processes.
Thresholds are not the only option for narrowing the scope of a law or regulation, and highlighting other options increases the odds that a consensus will emerge. Too often, debates around the scope of AI policy get caught on whether a certain threshold is overly burdensome for a particular class of actor. But adjusting the threshold itself is often not the most effective way to limit these spillover effects. The tools below are strong complements to the recommendations currently made in the Report.
By carve-outs, we mean a full statutory exclusion from coverage (at least for purposes of these comments). Common carve-outs to consider include:
- Small businesses
- Startups in particularly fragile funding ecosystems, onerous regulatory environments, or high-upside sectors that merit regulatory favoritism on innovation grounds
- Open-source model developers or hosts with the caveats noted below
- Providers of high-volume, low-cost services that, given their volume or margins, could not feasibly absorb additional regulatory costs (e.g., some chatbots)
- Social service providers or governments who provide a socially valuable service at low or no cost, especially where we expect that these actors may under-adopt useful technology due to other frictions
This is not to say that these categories should always be exempt, but rather that making explicit carve-outs for these categories will often ease tensions over specific thresholds. In particular, it is worth noting that while current open-source systems are clearly net-positive according to any reasonable cost-benefit calculus, future advances could plausibly merit some regulatory oversight. For this reason, any carve-out for open-source systems should be capable of being updated if and when that balance changes, perhaps with a heightened evidentiary burden for beginning to include such systems. For example, open-source systems might be generally exempt, but a restriction may be imposed upon a showing that the open-source systems materially increase marginal risk in a specific category, that other less onerous restrictions do not adequately limit this risk, and that the restriction is narrowly tailored.
Related, but less binary, is the use of tiered requirements that impose only a subset of obligations, or weaker obligations, on these favored models or entities: for example, requiring smaller entities to submit certain reports while not requiring them to perform the same evaluations. To support this approach, more legislation should likely include multiple or separate definitions of covered models, enabling a more nimble, select-only-those-that-apply approach to requirements.
Another option is to create exemption processes whereby entities can be relieved of their obligations if certain criteria are met. For example, a model might be exempt from certain requirements if it has not, after months of deployment, materially contributed to a specific risk category or if the model has fallen out of use. Unlike the first two options, these exemption processes can be tailored to case-by-case fact patterns and occur long after the legislative or regulatory process. They may also better handle harder-to-pin-down factors like whether a model creates exceptional risk. These exemption processes can vary in a few key respects, namely:
- Evidentiary: Presumptive or requiring a showing of evidence
- Decision maker: Self-attested, certified by a third party, or approved by a regulator
- Duration: Permanent or temporary
- Rigidity: Formulaic or factor-based with flexible considerations
- Speed: Automatic or requiring action or review
Recommendation: The Report already mentions that exempting small businesses from regulations will sometimes be desirable. It should build on this suggestion by emphasizing the utility of carve-outs, tiered requirements, multiple definitions, and exemption processes (in addition to thresholds) to further refine the category of regulated models. It should also outline some of the common carve-out categories (noting the value of maintaining option value by ensuring that carve-outs for open-source systems are revised and updated if the cost-benefit balance changes in the future) as well as key considerations in creating exemption processes.
4. We recommend that the Report elaborate on the approach of combining different types of thresholds by discussing the complementary pairing of compute and capabilities thresholds.
It is important to provide additional detail about other metrics that could be combined with compute thresholds because this approach is promising and one of the most actionable items in the Report. We recommend capabilities thresholds as a complement to compute thresholds in order to leverage the advantages of compute that make it an excellent initial filter, while making up for its limitations with evaluations of capabilities, which are better proxies for risk and more future-proof. Other metrics could also be paired with compute thresholds in order to more closely track the desired policy outcome, such as risk thresholds or impact-level properties; however, they have practical issues, as discussed in the Report.
Recommendation: The Report should expand on its suggestion that compute thresholds be combined with other metrics and thresholds by noting that capabilities evaluations may be a particularly promising complement to compute thresholds, as they more closely correspond to risk and are more adaptable to future developments and deployment in different contexts. Other metrics could also be paired with compute thresholds in order to more closely track the desired policy outcome, such as risk evaluations or impact-level properties.
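To make the recommended pairing concrete, the sketch below illustrates one way a two-stage scoping check could be structured, with a training-compute threshold serving as the initial filter and capability evaluations serving as the complement. It is a minimal, hypothetical illustration only: the 1e26 FLOP threshold, the benchmark names, and the trigger scores are assumptions chosen for the example, not figures drawn from the Report or from any existing policy.

```python
# Illustrative sketch only: a two-stage scoping check in which a
# training-compute threshold acts as an initial filter and capability
# evaluations refine coverage among filtered models. The threshold,
# benchmark names, and trigger scores below are hypothetical.

from dataclasses import dataclass, field

TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26  # hypothetical initial filter

# Hypothetical capability evaluations and the scores that would trigger
# coverage; a real policy would define these far more carefully.
CAPABILITY_TRIGGERS = {
    "cyber_offense_eval": 0.70,
    "bio_uplift_eval": 0.50,
}

@dataclass
class ModelProfile:
    name: str
    training_compute_flop: float
    eval_scores: dict = field(default_factory=dict)  # benchmark -> score in [0, 1]

def is_covered(model: ModelProfile) -> bool:
    """Return True if the model falls within this hypothetical policy's scope."""
    # Stage 1: compute threshold as a clear, measurable initial filter.
    if model.training_compute_flop < TRAINING_COMPUTE_THRESHOLD_FLOP:
        return False
    # Stage 2: capability evaluations as a risk-tracking complement.
    return any(
        model.eval_scores.get(benchmark, 0.0) >= trigger
        for benchmark, trigger in CAPABILITY_TRIGGERS.items()
    )

# Example: a model that clears the compute filter but shows no triggering
# capabilities would fall outside the covered category.
example = ModelProfile("example-model", 3e26, {"cyber_offense_eval": 0.40})
print(is_covered(example))  # False under these hypothetical settings
```

Under this kind of structure, the compute filter keeps the set of models requiring evaluation small and predictable, while the capability stage determines which of those models actually warrant coverage, which is one way the complementarity described above could operate in practice.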
5. The Report should note additional definitional considerations in the list in Section 5.1—namely, risk-tracking, resilience to circumvention, clarity, and flexibility.
The Report correctly highlights three considerations that influence threshold design: determination time, measurability, and external verifiability.
Recommendation: We recommend that the Report note four additional definitional considerations, namely:
- Risk-Tracking: How closely is the proxy correlated with the risks a policy looks to manage? Currently, compute correlates strongly with advanced capabilities. While there are some exceptions amongst specialized models, bigger is generally better. This remains true even after meaningful gains in inference scaling; it is true both that more inference compute leads to better results and that for any fixed amount of inference compute, a model with more training compute tends to perform better. Generally, the most compute-intensive models are the most likely to be deployed widely in new contexts and the most likely to exhibit emergent capabilities that pose unique risks. Compute is less correlated with risk than more direct measures like capabilities or risk itself, but both of these proxies are harder to measure and define.
- Resilience to Circumvention: How difficult is it to game the proxy or evade its application? Thresholds that are more difficult to circumvent are more effective, while easily circumvented thresholds risk becoming useless once a few actors demonstrate the ease of circumvention. Training compute is a difficult proxy to circumvent. While a threshold that focuses solely on training compute could miss models that rely heavily on inference, training compute is still a significant contributor to the capabilities of a model. Derivative models and distillations pose a meaningful obstacle here, as policymakers must decide whether and how to cover models with similar performance but different compute inputs. Generally speaking, requirements that lead to paperwork redundancies for similar models can likely be collapsed so that only one model is governed, while rules that relate to preventing or governing specific uses or risks may need to extend to derivatives and distillations to avoid becoming ineffective.
- Clarity: How confidently can a regulated party predict that it will be affected by regulation? And how quickly and clearly can regulators resolve ambiguities through interpretations and guidance? Compute thresholds are clear relative to more subjective alternatives. While there are some open questions regarding who measures compute and how it is measured, order-of-magnitude differences in compute usage will typically allow actors to know whether they fall in or out of scope of a regulation.
- Flexibility: Will the proxy remain accurate over time—because it remains the same, naturally adjusts, or allows for easy updating? Compute is less naturally adaptable than risk-based or capabilities-based thresholds.
For more discussion, see Training Compute Thresholds: Features and Functions in AI Regulation and The Role of Compute Thresholds for AI Governance.