US Tech Force: Why It Faces Major Challenges—and How It Can Succeed Anyway
Summary
- The US Tech Force is a new effort to recruit technical experts to accelerate AI adoption in the executive branch.
- Led by the Office of Personnel Management (OPM), it ambitiously aims to hire 1,000 fellows annually, starting this year.
- There are major benefits to this kind of program, but it will face steep challenges.
- It can use a special authority to hire quickly, but standard practices, such as the notoriously slow security clearance process, will likely cause delays.
- While OPM will recruit fellows, executive branch agencies will need to hire, onboard, and fund them—potentially creating friction and inefficiencies.
- Key interventions can increase the odds of the Tech Force’s success.
- Agencies can use interim security clearances to avoid months-long delays.
- Congress could provide specific funding for agencies to hire fellows.
- OPM can use its coordinating role to match fellows to agencies, rather than letting them compete for top candidates.
- The executive branch can use non-competitive eligibility and personnel exchanges with the private sector to increase the government’s return on investment.
On December 15, OPM announced a new program called the US Tech Force. The Tech Force is billed as a “cross-government program to recruit top technologists to modernize the federal government.” It intends to take on “the most complex and large-scale civic and defense challenges of our era,” running the gamut from “administering critical financial infrastructure at the Treasury Department to advancing cutting-edge programs at the Department of Defense.”
Through the program, the government plans to hire approximately 1,000 fellows each year who are highly skilled in software engineering, AI, cybersecurity, data analytics, or technical project management, to serve for one- to two-year terms. The Tech Force aims to foster early-career talent in particular, a demographic that the federal government has long struggled to recruit in sufficient numbers. To support the program, the government is partnering with private-sector companies, which will provide technical training and recruitment.
The sheer scale envisioned for the Tech Force makes it noteworthy. Compare it to the previous administration’s AI Talent Surge initiative, which hired around 250 people between October 2023 and October 2024. The U.S. Digital Corps, a similar program, had only 70 fellows in its most recent cohort in 2024. The timeline that OPM outlines for the Tech Force launch is also very ambitious: an initial pilot wave of fellows by Spring 2026, followed by the start of the first on-cycle cohort by September 2026.
While the Tech Force has huge potential, it will need to overcome the challenges inherent to rapid, large-scale talent acquisition in the federal government—and make sure that the government recoups the value of this significant investment.
Using a streamlined process and private partnership to meet the ambitious goals
Per a memo from OPM to agency heads sent the same day as the Tech Force announcement, Tech Force fellows will be hired as “Schedule A” federal employees. Schedule A is an “excepted-service” hiring authority. That means that fellows can be hired using streamlined procedures that generally shorten the hiring timeline, which otherwise runs about 100 days for the default “competitive-service” hiring used to fill most rank-and-file government positions. The difference is significant given that the Tech Force application only just closed on February 2, and at least one report says that the program is targeting start dates in March.
Notwithstanding the expedited Schedule A hiring process, OPM Director Scott Kupor has said that Tech Force fellows will still go through the normal channels for obtaining security clearances, though he noted that agencies have assured him that they will process fellows’ clearances—which typically take months or more—as efficiently as possible.
The Tech Force also involves public-private collaboration. So far, roughly thirty companies have agreed to partner with the government to support the program, including Amazon Web Services, Apple, Google Public Sector, Meta, Microsoft, Nvidia, OpenAI, Oracle, Palantir, and xAI. Per the Tech Force website, these companies can provide support in various ways, such as offering technical training resources and mentorship, nominating employees for participation in the program, and committing to considering Tech Force alums for employment after their government service has ended.
While it’s not apparent what form “technical training resources” will take, they could be quite valuable, coming from companies at the bleeding edge of AI and other critical technology. The government might therefore consider working with companies to extend such resources to other technical employees within the federal government, beyond just the Tech Force teams. This could help diffuse the knowledge and experience gains of the Tech Force program to more federal employees, magnifying the program’s overall impact.
OPM’s complicated role leading the initiative
OPM appears to have primary responsibility for the Tech Force, at least in terms of overall program administration and coordination. The Office of Management and Budget (OMB) and the General Services Administration (GSA)—which, like OPM, focus on governmentwide operations and resources—are also listed in OPM’s announcement as key players. The Tech Force’s website emphasizes that the program has the White House’s backing, with the OPM announcement specifying the involvement of the Office of Science and Technology Policy (OSTP). OSTP has had a prominent role in shaping the Administration’s AI policy, most notably the AI Action Plan released last July.
In the memo sent to agency heads, OPM explained that it will provide centralized oversight and administration for the program, including managing outreach, recruitment, and assessment of the fellows. However, individual agencies will be responsible for hiring, onboarding, and funding Tech Force fellows, with projects and assignments set by agency leadership. The memo also specifies that Tech Force teams will report directly to agency heads (or their designees).
Beyond such statements, OPM has yet to publicly outline how exactly its centralized recruitment and assessment of applicants will intersect with agencies’ responsibility for hiring fellows. Looking at other initiatives helps show the different structures possible for these sorts of programs, as well as their advantages and disadvantages.
First, there’s the U.S. Digital Corps, similarly centered on short-term tech-focused appointments for early-career talent, though much smaller than the Tech Force. In that program, the GSA served a coordinating role by pairing candidates with partnering agencies, with input from both applicants and agencies to gauge preferences and fit. While participants were formally hired by GSA, they were “detailed,” i.e., sent on assignment, to their partnering agencies for the duration of the fellowship. Contrast that with the much larger Presidential Management Fellows (PMF) program, which focused on early-career talent across a variety of disciplines. There, OPM provided centralized vetting and selected a slate of finalists, though agencies ultimately decided which, if any, finalists to interview and hire. For context, OPM selected 825 finalists for the PMF class of 2024.
Based on the OPM memo and other materials, the Tech Force seems closer in its intended structure and scale to the PMF program than to the U.S. Digital Corps. If that’s indeed the case, Tech Force leadership should stay aware of some potential pitfalls of that model. Notably, in the ten years before the PMF program was discontinued in 2025, on average 50% of its finalists did not obtain federal positions. Among other things, the fact that agencies had to pay an $8,000 premium to OPM for every PMF finalist they hired seems to have functioned as a disincentive, and the uncertainty caused by agencies’ long hiring timelines may have prompted finalists to pursue other career opportunities.
As the Federation of American Scientists suggested in the context of potential PMF reform, OPM should focus on creating a strong support ecosystem for the Tech Force to counteract these issues, including by strengthening key partnerships in agencies. Specifically, establishing dedicated and high-ranking “Tech Force Director” positions within agencies could foster a closer fit between agencies’ needs and the benefits the program can offer while continuing to share the administrative load of the program more broadly.
The goal: accelerating the government’s adoption of AI and attracting early-career talent
According to OPM, the purposes of the Tech Force are numerous. First and foremost, it aims to accelerate the government’s adoption of AI and other emerging technologies, including by deploying teams of technologists to various agencies to work on high-impact projects. That strong focus on AI is consistent with the involvement of OSTP, which has led on AI policy.
Judging from its website and other publicly available materials, the Tech Force appears to focus to a considerable degree on modernizing the federal government’s aging digital systems, perhaps more so than on the kind of work that offices like the Commerce Department’s Center for AI Standards and Innovation (CAISI) perform in evaluating the capabilities and risks of frontier AI models. Among the types of projects participants will work on, the Tech Force lists AI implementation, application development, data modernization, and digital service delivery.
Even though AI evaluation and monitoring work isn’t explicitly on the list, it fits well within the Tech Force’s large anticipated scale and its broad aim to build “the future of American government technology.” Such work improves the security of AI systems, both for commercial uses and when deployed throughout the federal government. Moreover, expressly including AI evaluation and monitoring work within the scope of the program might bolster recruitment of top talent given its intersection with high-profile national security work—interesting and valuable experience that tech experts typically can’t get outside of the government.
On the subject of recruitment, publicly available materials like the Tech Force website and OPM memo convey a focus on early-career talent, to “[h]elp address the early-stage career gaps in government.” As OPM Director Kupor noted on the heels of the Tech Force announcement, the federal government has long trailed the private sector in attracting and hiring junior talent, with only 7% of the federal workforce under the age of thirty. Kupor frames the Tech Force as part of OPM’s broader effort to “Make Government Cool Again,” infusing it with “newer ideas and newer experiences” to keep pace with rapid technological change.
The Tech Force also plans to employ “experienced engineering managers,” who will lead and mentor teams composed largely of early-career talent. While the Tech Force will primarily seek early-career talent via traditional recruiting channels, it appears that managers will be drawn mostly or perhaps even exclusively from the program’s partnering companies. OPM thus notes that the program will give mid-career technology specialists an opportunity to gain government experience without necessitating a permanent transition.
Two major challenges for the Tech Force—and how to tackle them
1. Getting the Tech Force set up quickly
OPM is aiming for an initial pilot wave of fellows by the spring (with one report specifying that it’s targeting start dates by March), and for the first full cohort of 1,000 fellows by September. That schedule is possible, though it means that agencies will have to significantly improve upon the government’s average hiring and clearance timelines, which generally take several months, if not longer. There are several ways to do that.
For context, the special “Schedule A” authority that will be used to hire Tech Force fellows can theoretically be deployed very quickly, because it doesn’t require time-consuming procedures, like the rating and ranking of applicants, that regular hiring entails. Though OPM plans to have applicants undergo a technical assessment and potentially interviews with agency leadership, those steps might conceivably add only a few weeks, or even less if the process is well staffed.
As for security clearances, there’s similarly no legal barrier to them moving quickly—for example, they have sometimes been issued in a matter of weeks or even days for political appointees, such as those needed for crisis-response efforts and other urgent matters. However, the clearance timeline for the average new hire runs anywhere from two to six months or more, depending on the level of clearance needed and whether the case presents any complications, like foreign business ties, necessitating further investigation.
OPM Director Kupor has said that agencies have promised to process Tech Force fellows’ security clearances as quickly as possible. Indeed, timelines for clearances can shrink from months to days when personnel know that a particular matter is a top priority for the head of their agency or the White House, as documented by Raj Shah and Christopher Kirchhoff in their 2024 book Unit X, about the Pentagon’s elite Defense Innovation Unit.
To this end, agencies could make use of “interim” security clearances for fellows, which would allow them to begin work pending a final clearance decision in cases that don’t raise concerns upon initial review. Interim clearances can shave months off the timeline, yet agencies appear to use them unevenly, perhaps overestimating the risk of a negative final clearance decision. But if agencies are to meet the Tech Force’s ambitious goals (especially for a pilot wave of fellows as early as March), then they need to consider utilizing this tool—and personnel within the organization need to know that they have cover from their leadership in using it.
Still, other operational challenges and questions loom. While pressure from the top can accelerate hiring and clearance timelines, the agency teams tasked with fulfilling such mandates may find it difficult to maintain the pace over longer periods if they’re inadequately resourced and staffed. Furthermore, OPM has made it clear that agencies will fund Tech Force fellows and projects themselves, leaving the overall financial footing of the program unclear, and potentially delaying its actual launch at any agencies that might struggle to find available funds on relatively short notice.
Congress could bolster the Tech Force by appropriating funds to specifically support it, including project budgets, fellows’ salaries, and the other costs associated with hiring and clearing fellows rapidly and at scale. Without dedicated appropriations, agencies may vary widely in the amount of discretionary funding that they’re able or willing to devote to the program, especially at the outset of this new and previously unaccounted-for expenditure. Congress could also pass measures aimed at increasing the ability of federal hiring teams to assess AI talent, like those in the bipartisan “AI Talent” bill introduced on December 10. Building up this type of AI-enabling talent would help agencies work efficiently in selecting and hiring the right technical expertise, for both the Tech Force and other similar hiring efforts.
Additionally, given OPM’s statements that hiring of Tech Force fellows will be conducted directly by agencies, it remains to be seen how exactly OPM will ensure that agencies don’t waste resources—including valuable time—competing over the same candidates. Within the private sector, competition for AI talent is fierce, and that dynamic seems likely to affect the Tech Force as well. The Tech Force job vacancies posted so far note that they’re “shared announcements” from which various agencies may hire, and suggest that it’ll be up to the agencies to decide which candidates to interview and make offers to. OPM should play a robust coordinating role here so that smaller or less well-known agencies aren’t disadvantaged in attracting and securing sufficient qualified hires. Based on the scale of the program, OPM might even consider a model like the “medical match” system used to pair medical students with residency programs, to help both applicants and agencies weigh their options and needs in a more efficient and organized manner.
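To illustrate what a match-style system could look like, here is a minimal sketch of the applicant-proposing deferred-acceptance algorithm that underlies the medical match. The applicants, agencies, preference lists, and function names are all hypothetical; OPM has announced no such mechanism for the Tech Force.

```python
# A minimal sketch of an applicant-proposing deferred-acceptance match,
# the algorithm behind the medical residency match. All names and
# preference data below are hypothetical.

def deferred_acceptance(applicant_prefs, agency_prefs, capacities):
    """Match applicants to agencies; each side ranks the other.

    applicant_prefs: {applicant: [agencies in descending preference]}
    agency_prefs:    {agency: [applicants in descending preference]}
    capacities:      {agency: number of available slots}
    """
    # Precompute each agency's ranking of applicants for fast comparison.
    rank = {agency: {app: i for i, app in enumerate(prefs)}
            for agency, prefs in agency_prefs.items()}
    held = {agency: [] for agency in agency_prefs}  # tentative matches
    next_idx = {app: 0 for app in applicant_prefs}  # next agency to try
    unmatched = list(applicant_prefs)

    while unmatched:
        app = unmatched.pop()
        prefs = applicant_prefs[app]
        if next_idx[app] >= len(prefs):
            continue  # applicant has exhausted their preference list
        agency = prefs[next_idx[app]]
        next_idx[app] += 1
        held[agency].append(app)
        # Agency keeps only its top candidates, up to capacity.
        held[agency].sort(key=lambda a: rank[agency].get(a, float("inf")))
        while len(held[agency]) > capacities[agency]:
            unmatched.append(held[agency].pop())  # bumped; proposes again

    return held

# Toy example: three applicants, two agencies, three total slots.
applicants = {"ana": ["DOD", "Treasury"], "ben": ["Treasury", "DOD"],
              "cruz": ["DOD", "Treasury"]}
agencies = {"DOD": ["cruz", "ana", "ben"], "Treasury": ["ana", "ben", "cruz"]}
print(deferred_acceptance(applicants, agencies, {"DOD": 1, "Treasury": 2}))
# -> {'DOD': ['cruz'], 'Treasury': ['ana', 'ben']}
```

A mechanism along these lines produces a stable outcome: no applicant and agency would both prefer each other over their assigned match, which reduces the incentive for agencies to race one another for the same candidates.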
Finally, the public-private structure of the program could present some challenges. As full-time federal employees, Tech Force fellows will be subject to the generally applicable conflict of interest rules, which prohibit government employees from receiving outside compensation, or from accessing information or taking action that could unduly benefit themselves or closely related parties financially. Given such rules, the Tech Force website notes that participants nominated by partner companies are expected to take unpaid leave or to separate from their private-sector employers while working for the government. Even so, conflict of interest rules can complicate federal service for tech-company employees, who often have deferred compensation packages (e.g., restricted stock units, options) that vest over time, perhaps several years in the future.
The Tech Force website says that it “expect[s]” that fellows, including those nominated for the program by partnering companies, will be able to retain any deferred compensation packages, though it mentions that companies will need to review details on a case-by-case basis to determine whether any such compensation must be suspended while an individual remains in the program. It bears noting that agencies’ ethics offices may also have to review potential conflicts on a case-by-case basis as they arise, with an eye to the details of the particular financial interest and government matter at issue. Because of the fact-specific nature of that analysis, it’s hard to generalize the result, and the answers could morph over time as projects and financial situations change during the course of government service.
Without greater clarity on whether and how the government can consistently address the challenges raised by conflict of interest rules, the Tech Force may struggle to recruit and retain some promising candidates. This issue is perhaps most significant for the senior engineering managers (with the most compensation on the line) that the program plans to draw from private partners.
2. Ensuring the Tech Force provides long-term benefits for the government
As OPM Director Kupor acknowledges, the federal government has a problem with its early-career talent pipeline, particularly as it relates to the need for greater adoption of AI and other emerging technology. He has cast the Tech Force as part of the solution, a way to infuse the government with a new crop of tech-savvy employees who will lend their expertise to projects of national scope and, in the process, discover that federal service can indeed provide interesting and valuable experience. But it’s unclear at this point how the Tech Force—with its standard two-year service term—will translate to enduring change in the makeup of the federal workforce, or raise the overall level of technological uptake across government. To do so, program leadership could pursue two additional steps.
First, the White House could consider issuing an executive order granting “non-competitive eligibility” (NCE) to Tech Force fellows who successfully complete the program. NCE status allows individuals to be hired for competitive-service positions (which comprise the majority of federal civilian hiring) without having to compete against applicants from the general public. Thus, an individual with NCE status can be hired much more quickly than would otherwise be the case, assuming of course that they meet the qualifications of the position. For any Tech Force fellows interested in continuing their government service after they complete the program, NCE status would likely significantly streamline the process of obtaining another federal position, and ensure that the government doesn’t lose proven talent that’s eager to stay on. The Tech Force website in fact recognizes that some fellows may apply for continued federal service following the end of the program, and so granting NCE status to successful participants is in sync with its goals.
NCE is commonly granted to the alumni of federal programs like the Peace Corps, Fulbright Scholarship, and AmeriCorps VISTA, making it a natural fit for the Tech Force, not least because of its emphasis on early-career talent. NCE is typically valid for between one and three years following successful service completion, though the timeframe specified is entirely up to the White House’s discretion. Establishing a three-year NCE period for Tech Force alumni (versus one or two years) would give individuals the option of pursuing meaningful professional experiences outside of government before potentially returning for another stint. Likewise, it would give the government a broader window in which it might recoup its investment in training previous Tech Force talent.
Second, Tech Force leadership might consider expanding the program to include some opportunities for existing government employees to serve one- to two-year terms in the private sector. The Tech Force’s private-sector partners provide a potential ready-made source of such opportunities, and companies might be amenable given that they will lose some of their own employees and managers to the program for similar periods. For the federal government, making the Tech Force a two-way exchange (even in numbers much more modest than the Tech Force’s 1,000 fellows) would amplify the government’s access to knowledge and experience regarding AI and other emerging tech, beyond the Tech Force teams and projects themselves. That might be valuable insofar as the Tech Force teams could end up being quite insular given their direct reporting line to agency heads.
This sort of science-and-technology exchange program already has some precedent at federal agencies, and would increase diffusion of the Tech Force’s capacity-building benefits throughout the federal workforce. Upon their return to federal service, government employees might disseminate lessons and approaches from the private sector among their colleagues and teams. Furthermore, because the Tech Force’s team leaders will be drawn (perhaps exclusively) from private-sector partners, a two-way exchange could be a way to give existing government tech managers important experience, ultimately providing agencies with a deeper bench of mid- and senior-career talent.
xAI’s Trade Secrets Challenge and the Future of AI Transparency
xAI is challenging a California state law that took effect at the beginning of this year, requiring xAI and other generative AI developers who provide services to Californians to publicly disclose certain high-level information about the data they use to train their AI models.
According to its drafters, the law aims to increase transparency in AI companies’ training data practices, helping consumers and the broader public identify and mitigate potential risks and use cases associated with AI. Supporters of this law view it as an important step toward a more informed public. Detractors view it as innovation-stifling. Other developers, including Anthropic and OpenAI, have already released their training data summaries in compliance with the new law.
xAI challenges AB-2013 on the grounds that the law would force it to disclose proprietary trade secrets, thereby destroying their economic value, in violation of the Fifth Amendment Takings Clause. It also claims that the law constitutes compelled speech in violation of the First Amendment and is unconstitutionally vague because it does not provide sufficient detail on how to comply. In this note, I focus on the trade secrets claim.
At the core of this dispute lies a tension between the values of commercial secrecy and transparency. In other industries and contexts, this tension is a familiar one: a company develops commercially valuable information – a recipe, a special sauce, or a novel way to produce goods efficiently – that it wishes, for good reason, to keep secret from competitors; at the same time, consumers of that company’s goods or services wish to know, for good reason, the nature and risks of what they are consuming. Sometimes, there is an overlap between the secrets a company wishes to keep and the information the public wishes to know. When that happens, the law plays an important role in resolving that tension one way or the other. How it does so can be as much a political question as a legal one.
This is what AB-2013 and xAI’s challenge to it is about. The AI industry is highly competitive, and companies have a legitimate interest in protecting any hard-won competitive edge that their secret methods provide. At the same time, the public has many unanswered questions about the nature of these services, which are increasingly embedded in their lives. There are weighty principles on either side. The outcome of this dispute could shape the legal treatment of these competing interests for years to come.
What does AB-2013 ask for?
Under section 3111a of AB-2013, developers must disclose a “high-level summary” of aspects of their training data, including:
- The sources or owners of the datasets.
- The purpose and methods of collecting, processing, and modifying datasets.
- Descriptions of their data points, including what kinds of data are being used and the scale of the datasets.
- Whether personal information or aggregate consumer information is being used for training.
- Whether datasets include third-party intellectual property, including copyright and patent materials.
- The date ranges of when the datasets were used.
The law came into effect on January 1 of this year and applies retroactively to the datasets of models released on or after January 1, 2022.
There are a few narrow exceptions for particular AI models, namely those used for security and integrity purposes, for the operation of aircraft, and for national security, military, or defense purposes. There are, however, no exceptions for information that constitutes a trade secret.
What is a trade secret?
The crux of xAI’s position is that complying with AB-2013 would force it to reveal its trade secrets.
Broadly, a trade secret is any information that a company has successfully kept secret from competitors, and that confers a competitive advantage because of its secrecy. In other words, it must both be a secret in fact and generate independent economic value as a result of that secrecy. Trade secrets receive protection under state and federal law, and since the US Supreme Court’s 1984 decision in Ruckelshaus v. Monsanto Co., they can constitute property protected by the Fifth Amendment’s Takings Clause.
While in principle the definition of a trade secret is broad enough to encompass virtually any information that meets its criteria, it is easier to claim trade secrecy for specific ‘nuts and bolts’ information, such as particular manufacturing instructions or the specific recipe for a food product. This is because revealing those details directly enables competitors to replicate them. Conversely, claims for general and abstract information are harder to establish because they tend to give less away about a company’s internal strategies. This is relevant to AB-2013, since it requires only a “high-level summary” of the disclosure categories.
Before applying this to xAI’s claim, it is important to note that regulations restricting the scope and protection of trade secrets are not necessarily unconstitutional. Constitutional doctrine balances trade secret protection against other interests, including the state’s inherent authority to regulate its marketplace by imposing conditions on companies that wish to participate in it. In some circumstances, disclosure of trade secrets may be one such condition.
Against this backdrop, xAI brings two trade secret challenges against AB-2013:
- First, it claims that AB-2013 constitutes a “per se” taking, meaning that the government is outright appropriating its property without compensation.
- Second, it claims that AB-2013 constitutes a regulatory taking, meaning that the government is imposing unjustified conditions on its property that substantially undermine its economic value to xAI.
xAI’s first claim: AB-2013 is a per se taking
xAI’s per se takings challenge is its most aggressive and atypical. Traditionally, this type of claim applies to government actions that would assume control or possession of tangible property, for example, to build a road through a person’s land. A per se taking can also occur when regulations would totally prevent an owner from using their property.
The court will need to consider first, whether AB-2013 targets xAI’s proprietary trade secrets and second, whether the law would appropriate or otherwise eviscerate xAI’s property interest in them. To my knowledge, no one has ever successfully argued a per se taking in the context of trade secrets, and there are good reasons to think xAI will not be the first.
(a) Does AB-2013 target xAI’s proprietary trade secrets?
xAI claims that through significant research and development, it has developed novel methods for using data to train its AI models, and that the secrecy of this information is paramount to its competitive advantage. It claims that its trade secret lies in the strategies and judgments xAI makes about which datasets to use and how to use them. To demonstrate the importance of secrecy, xAI cites various security protocols and confidentiality obligations it imposes internally to protect this information from getting out.
Given that the information in question remains undisclosed, it is difficult to assess the value and status of what xAI would be required to reveal. We can reasonably assume that xAI does indeed possess some genuinely valuable secrets about how to effectively and efficiently approach training data. Yet it is much less clear whether any such secrets are implicated by the high-level summary required by AB-2013.
For example, suppose that xAI has developed a specific novel heuristic for curating and filtering datasets that allows it to achieve a particular capability more efficiently than publicly known methods. It could still disclose the more general fact that its datasets are curated and filtered, without jeopardizing the secrecy of that particular heuristic. Likewise, perhaps a specific method for allocating datasets between pre-training and post-training constitutes a trade secret. AB-2013 does not ask what the specific allocation method is. To that end, if xAI’s disclosure were comparable in scope to those of OpenAI and Anthropic, it would be highly unusual for this degree of detail about a company to constitute a trade secret.
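To make the distinction concrete, consider a toy, entirely hypothetical dataset-filtering heuristic. The specific rules and thresholds below stand in for the kind of ‘nuts and bolts’ detail a developer might plausibly guard as a trade secret, while a high-level summary under AB-2013 could state only that data are filtered for quality, without revealing any of them.

```python
# A toy, entirely hypothetical quality filter for training documents.
# The thresholds and rules here are invented for illustration; AB-2013's
# high-level summary need only say that such filtering occurs.

def keep_document(doc: str) -> bool:
    """Return True if a document passes this (invented) quality filter."""
    words = doc.split()
    if len(words) < 50:                        # drop very short fragments
        return False
    if len(set(words)) / len(words) < 0.3:     # drop highly repetitive text
        return False
    alpha_ratio = sum(c.isalpha() for c in doc) / len(doc)
    return alpha_ratio > 0.6                   # drop markup/symbol-heavy text
```

Disclosing that a filter of this kind exists gives competitors little; disclosing the thresholds and scoring logic is what would enable replication.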
Yet a link of that specificity is precisely what xAI must demonstrate. To constitute a per se taking, it is not enough that disclosure provides clues about underlying secrets or even that it partially reveals them. xAI must show a more direct connection between the disclosure categories and its trade secrets.
(b) Does AB-2013 appropriate xAI’s proprietary trade secrets?
If the above analysis is correct, xAI will struggle at this second stage to show that disclosure would constitute a categorical appropriation or elimination of all economically beneficial value in the relevant property. If xAI lacks a discrete property interest in the disclosable information, it is hard to envision a court finding that AB-2013 would nevertheless indirectly appropriate some other property interest.
There are a few additional issues to mention. For one, unlike in a classic per se takings claim, here the claimed property would be extinguished by the law, rather than transferred to the control or possession of another entity. This is for the simple reason that, like ordinary secrets, a trade secret ceases to exist (ceases to be a secret) if it is publicly known. Since AB-2013 would destroy any trade secrecy in the disclosable information, the application of traditional takings analysis is a bit awkward.
Further, California can argue that AB-2013 is a conditional regulation: it requires disclosure only as a condition for developers operating in the California marketplace, and developers may choose whether to do so. This makes it seem less like an outright taking by the government and more like a quid pro quo that companies may choose to engage in voluntarily.
However, this argument is considerably weaker with respect to AB-2013’s retroactive application to services provided since 2022, as companies affected by that clause cannot now choose to opt out. This raises a further question: whether these regulations were foreseeable, or whether xAI had a reasonable expectation that they would not be introduced. That question is central to the second claim advanced by xAI, and I will analyze it below.
xAI’s second claim: AB-2013 is a regulatory taking
xAI’s second argument is more orthodox. xAI argues that, even if AB-2013 is not an outright appropriation of its trade secrets, it imposes regulations that so significantly interfere with them as to amount to a taking. This argument avoids some of the hurdles of the first: it does not require that AB-2013 completely eviscerate the claimed trade secret, and there are several precedents in which this argument has been successfully made.
To determine the constitutionality of AB-2013, the court will balance the following factors established in Penn Central:
(a) The economic damage that xAI would suffer by complying with the law;
(b) The character of the government action, including the public purpose that disclosure is intended to serve; and
(c) Whether xAI had a reasonable investment-backed expectation that it would not be required to disclose this information at the time that it developed it.
(a) Economic damage to xAI
As noted above, the present information asymmetry makes it difficult to assess the harm disclosure would cause to xAI, and there are reasons to be skeptical that a high-level summary would meaningfully disadvantage xAI. Nevertheless, let’s assume that compliance would indeed destroy something valuable to xAI. In that case, the state would need to justify this disadvantage to xAI on further grounds.
(b) The character of the government action
As noted, states have the authority to regulate their marketplaces. This gives them some scope to regulate trade secrets in the service of a legitimate public interest. The public interest in the disclosable information is therefore key to California’s defense of AB-2013.
While xAI emphasizes the disadvantages that disclosure would cause for its business, it downplays the public interest in this information. It questions why the public needs to know these details and argues that they would be largely unintelligible and uninteresting.
Despite what xAI suggests, there are reasons to be interested in the disclosable information, both for direct consumers and for researchers, journalists, and other third parties who could use it to enhance public understanding. For example:
- Whether a model is trained on proprietary, personal, or aggregate consumer information can help users understand the legal and ethical implications of using such models and enable them to make educated choices among their options.
- Understanding the sources, purposes, and types of training data may help identify the biases and limitations of particular models and appropriate use cases.
- The date ranges of datasets can help identify gaps in a model’s capabilities and areas in which its responses may rely on outdated data.
- Information about training data sources can be used to assess the risk of data poisoning attacks that could cause the model to behave unsafely in certain circumstances, posing risks to both consumers and third parties.
- Information about training data can be used to assess the risk that data contamination makes model evaluations (including evaluations on which consumers rely) less reliable; a brief sketch of such a check follows this list.
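As a brief illustration of that last bullet, the sketch below shows one common heuristic for assessing contamination: measuring how many of a benchmark item’s n-grams appear verbatim in a training corpus. The function names and the 8-gram heuristic are assumptions made for the sketch, not anything AB-2013 prescribes, and real contamination audits are considerably more involved.

```python
# An illustrative n-gram overlap check for benchmark contamination.
# The names and the 8-gram heuristic are assumptions for this sketch.

def ngrams(text: str, n: int = 8) -> set:
    """All n-word sequences in the text, lowercased."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_score(eval_item: str, corpus_docs: list, n: int = 8) -> float:
    """Fraction of an eval item's n-grams found verbatim in the corpus.
    A high score suggests the benchmark item may have leaked into training."""
    item_grams = ngrams(eval_item, n)
    if not item_grams:
        return 0.0
    corpus_grams = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    return len(item_grams & corpus_grams) / len(item_grams)
```

Disclosed source descriptions and date ranges would help third parties decide which corpora and benchmarks such checks should target, even without access to the underlying datasets.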
Note that even if it is not known in advance precisely why certain metadata is relevant to consumers, this is not an argument for secrecy. Some risks will only be identified once the information is made public, as when an ingredient or chemical is identified as toxic after the fact. There may be highly consequential decisions in training data that are only understood later. Given the current opacity of generative AI, it is reasonable for the public to expect greater transparency. AB-2013 is, in this sense, a precautionary regulation.
There are two considerations to note in favor of xAI here. First, although there is some public interest in the relevant information, the degree of interest may not seem as immediately apparent as in other contexts. For example, ingredient lists of food products may seem more immediately consequential to consumers. Second, it is plausible that some of the public interest could be met by a more controlled disclosure environment, such as disclosure to regulators rather than the public at large.
(c) Did xAI have a reasonable investment-backed expectation?
A crucial element of xAI’s regulatory takings challenge is the claim that it developed the information with a reasonable expectation that the law would protect it as a trade secret. A takings challenge can make sense in such cases, since a state cannot capriciously revoke title to property it previously recognized and that companies relied on – at least not without compensation.
xAI claims that it had no reason to suspect this information might become disclosable, and that doing so is contrary to a long tradition of trade secret protection in the US. It points out that the regulations came to its attention only a full calendar year after it commenced operations.
There are important counterarguments to this. First, the tradition of protection that xAI cites is in fact one of balancing the protection of commercial secrets with the public interest in being informed – xAI’s characterization of the law ignores the equally old tradition of states regulating commerce in ways that protect this public interest. In California alone, there are many laws requiring some form of disclosure, whether it concerns the chemicals in cleaning products, cookware, menstrual products, or pesticides, or the privacy policies and automatic renewal practices of digital services. Second, there is no long-standing tradition of the law protecting high-level summaries of AI training data from regulation, as this is a novel form of information in a new field of industry. At the time xAI invested in and developed this information, the regulatory regime was in its infancy, and it would not have been reasonable then to assume that regulation would not follow. Regulators, for their part, responded within a reasonable time.
Indeed, the reason this issue is so important now is precisely that there is a window to regulate trade secrets in a way that fosters appropriate expectations.
The broader implications of xAI’s challenge
Separately from AB-2013, other state laws are beginning to require AI developers to disclose information about their models. Laws such as SB 53 and the RAISE Act would require frontier AI companies to disclose mitigation strategies for catastrophic risks posed by AI.
Those particular disclosure laws are likely to be more secure against similar challenges for a few reasons. First, they target information with a more immediate and overwhelming public interest, since they are directly concerned with mitigating major loss of life and billion-dollar damage. Second, they explicitly exempt trade secrets from disclosure. As I have argued elsewhere, that creates a new set of problems.
Nevertheless, the outcome of this case could shape the future of transparency in those and other areas of AI. The outcome will help establish the expectations that are reasonable for AI developers to have when structuring their commercial strategies. Reliance on those expectations makes it difficult for regulators to change the transparency rules in the future. While trade secrets are not a trump against transparency measures, they are strongest when legal expectations are well established. Yet here, where the AI industry is new, opaque, and the public has a genuine interest in greater transparency, there is an opportunity to strike a reasonable compromise between competing interests. This makes it all the more important to find the appropriate balance between commercial secrecy and transparency in the AI industry today.
xAI’s Challenge to California’s AI Training Data Transparency Law (AB2013)
Summary
- California’s Generative Artificial Intelligence Training Data Transparency Act (AB2013) requires developers of generative AI systems made available in California to publish high-level summaries of the data used to train those systems starting on January 1, 2026.
- xAI, the developer of Grok, has filed a federal lawsuit seeking to block the statute, arguing that it compels disclosure of trade secrets in violation of the Fifth Amendment and forces speech in violation of the First Amendment.
- Although AB2013 is modest on its face and has not yet been enforced, the lawsuit is worth following: the complaint previews constitutional arguments that could be raised against future state and federal AI transparency requirements.
- Notably, unlike xAI, other major AI developers, including OpenAI and Anthropic, haven’t sued and have already posted AB2013 disclosures on their websites.
What AB2013 Requires (and What It Does Not)
AB2013 applies broadly to developers who provide generative AI systems to Californians, whether offered for free or for compensation. Covered developers must post documentation on their website describing data used to train, test, validate, or fine-tune their models. The statute requires high-level disclosures, including:
- general sources and characteristics of training data,
- how datasets relate to the system’s intended purpose,
- approximate size of the data (expressed in ranges or estimates),
- whether the data includes copyrighted or licensed material,
- whether personal or aggregate consumer information is involved, and
- whether synthetic data are used.
These requirements apply to models released since January 1, 2022. The disclosures must also be updated for any new models or substantial modifications to existing models.
AB2013 includes exemptions for systems used solely for security and integrity, aircraft operation, or national security and defense purposes available only to federal entities. Critically, the statute does not specify how detailed a “high-level summary” must be, and the Attorney General has not yet issued guidance or initiated enforcement. The statute includes no standalone enforcement provision; enforcement would likely proceed through California’s Unfair Competition Law, presumably at the discretion of the Attorney General.
The Fifth Amendment Claim: Trade Secrets
The Fifth Amendment’s Takings Clause prohibits the government from taking private property without just compensation. xAI argues that information about its training datasets constitutes protected trade secrets and that AB2013 effects an unconstitutional taking by forcing public disclosure. The complaint advances both a per se takings theory and a regulatory takings theory, asserting interference with xAI’s reasonable investment-backed expectations.
The Supreme Court has recognized that trade secrets can be property for Takings Clause purposes. Whether a taking occurs turns on whether the trade secrets’ owner had a reasonable expectation of confidentiality based on the state of applicable laws and regulations at the time the information was developed. AB2013 applies retroactively to models released before the statute was enacted, which could strengthen a takings claim compared to a regime where transparency obligations were known in advance.
At the same time, xAI’s Fifth Amendment claim depends on whether AB2013 actually requires the disclosure of information that qualifies as a trade secret. Trade secret protection generally depends on whether the business or technical information at issue derives independent economic value from not being publicly known and is subject to reasonable efforts to maintain its secrecy. That inquiry is necessarily fact-specific, and it depends on what level of detail AB2013 ultimately requires developers to disclose.
That analysis is informed by how AB2013 is being implemented in practice. OpenAI and Anthropic have already posted AB2013 disclosures that appear to be high-level and general—OpenAI’s particularly so. If the California Attorney General takes the position, whether explicitly or implicitly, that those disclosures satisfy the statute, that would substantially weaken any claim that compliance necessarily requires revealing proprietary or economically valuable information. In that case, xAI would likely bear the burden of showing that its own disclosures would be materially different such that compliance would diminish the value of its trade secrets in a way not shared by its competitors.
These issues are not unique to AB2013. Other state and federal proposals, such as California’s SB53 and New York’s recently signed RAISE Act, also involve disclosure obligations that may call for sensitive commercial information. Unlike AB2013, which explicitly mandates public disclosure, other AI regulations may rely on disclosures directed to regulators rather than the public, along with explicit limits on public release. For example, SB53 explicitly permits AI developers to redact trade secret information from public disclosures and excludes trade secret information from the government’s public reports based on submitted data. While those provisions don’t eliminate all trade secret concerns and may undercut some transparency objectives, they function as a safety valve that can also reduce exposure to trade secret takings claims.
Still, the underlying question—how to balance transparency objectives against trade secret protections—will keep coming up as state and federal AI laws and regulations continue to develop.
The First Amendment Claim: A Potentially Broader Challenge to Disclosure Mandates
The complaint also argues that AB2013 violates the First Amendment by compelling speech. Under existing Supreme Court precedent from a case called Zauderer, the government may generally require disclosure of “purely factual and non-controversial information” under the more deferential standard of rational basis review.
At this stage, xAI first contends that AB2013 is a content-based regulation triggering heightened scrutiny, pointing to the statute’s exemptions. That argument appears weak: purpose-based exemptions for security and defense applications do not obviously constitute viewpoint or content discrimination. xAI also suggests that AB2013 was motivated, at least in part, by concerns about bias in AI systems and therefore implicates politically controversial issues. This theory draws on case law that treats certain mandated disclosures, such as those imposed on crisis pregnancy centers, as outside the category of “purely factual and non-controversial” speech. Notably, however, the statute itself does not require reporting on bias or anti-bias measures and instead focuses narrowly on the sources and technical characteristics of training data.
More broadly, xAI argues that the Supreme Court’s Zauderer doctrine should be narrowed so that it doesn’t apply to statutes like AB2013 at all. Specifically, xAI urges limiting that doctrine to disclosures aimed at preventing consumer deception in advertising, or, alternatively, speech that “proposes a commercial transaction.”
These arguments would, if accepted, call into question many proposed AI transparency requirements, including those in California’s SB53 and New York’s recently signed RAISE Act. The same logic would extend beyond AI, potentially constraining disclosure requirements that are common across financial, environmental, and health and safety regulations. In fact, the Supreme Court recently declined to revisit the scope of disclosure doctrine in litigation over graphic cigarette warning requirements, leaving intact lower court decisions that upheld disclosure mandates on the ground that they were “purely factual and non-controversial,” while rejecting further limits on Zauderer.
Overall, xAI’s First Amendment theories rest on areas where First Amendment law is not fully settled—and appear aimed less at the district court than at appeals courts (or even, ultimately, the Supreme Court).
Bottom Line
xAI’s lawsuit raises constitutional arguments that are likely to recur as governments pursue AI transparency and oversight. That makes the case worth following regardless of the ultimate outcome. At the same time, xAI’s specific claims in this lawsuit face significant hurdles. The Fifth Amendment claim depends on whether AB2013 requires the disclosure of valuable trade secrets—but Anthropic and OpenAI have already published AB2013 disclosures without apparent difficulty. The First Amendment claim, meanwhile, seeks to narrow the government’s ability to mandate factual commercial disclosures. If accepted, xAI’s position would have implications well beyond this statute—potentially calling into question a range of recently enacted and proposed AI transparency regimes, as well as other regulations beyond AI. As such, even if AB2013 itself proves limited or short-lived, xAI’s lawsuit previews important legal issues that will shape future AI regulation.
Legal Obstacles to Implementation of the AI Executive Order
About a month ago, I published an initial analysis of a leaked draft of an AI-related executive order that was rumored to be forthcoming. For a few weeks thereafter, it looked as if the draft might not actually make it past the President’s desk, or as if the final version of the executive order might be substantially altered from the aggressive and controversial draft version. On December 11, 2025, however, President Trump signed an executive order that substantially resembled the leaked draft.
Because the executive order (EO) is virtually identical to the leaked draft in terms of its substance, my analysis of the legal issues raised by that draft remains applicable. But since I published that first analysis on November 20, LawAI has had a chance to conduct further research into some of the questions that I wasn’t able to definitively resolve in that first commentary. Additionally, intervening events have provided some important context for understanding what the consequences of the executive order will be for AI policy in the U.S. Accordingly, I’ve decided to publish this updated commentary, which incorporates most of the previous piece’s analysis as well as the results of subsequent research.
What Does the Executive Order Purport to Do?
As an initial matter, it’s important to understand what an executive order is and what legal effect executive orders have in the United States. An executive order is not a congressionally enacted statute or “law.” While Congress undoubtedly has the authority to preempt some state AI laws by passing legislation, the President generally cannot unilaterally preempt state laws by presidential fiat (nor does the EO purport to do so). What an executive order can do is to publicly announce the policy goals of the executive branch of the federal government, and announce directives from the President to executive branch officials and agencies. So, contrary to what some headlines seem to suggest, the EO does not, and could not, preempt any state AI laws. It does, however, instruct a number of actors within the executive branch to take various actions intended to make it easier to preempt state AI laws in the future, or to make it more difficult for states to enact or enforce AI laws that are inconsistent with the White House’s policy positions.
It’s also worth noting that the EO’s title, “Ensuring a National Policy Framework for Artificial Intelligence,” should not be taken at face value. Because the idea of passing federal AI policy is considerably more popular than the idea of preventing states from enacting AI policy, the use of the term “federal framework” or some equivalent phrase as a euphemism for preemption of state AI laws has become something of a trend among preemption advocates in recent months, and the EO is no exception. While the EO does discuss the need for Congress to pass a national policy framework for AI, and while sections 6 and 8 do contemplate the creation of affirmative federal policies, the EO’s primary goal is clearly the elimination of undesirable state AI laws rather than the creation of federal policy.
The EO is relatively short, clocking in at just under 1,400 words, and consists of nine sections. This commentary summarizes the EO’s content, discusses the differences between the final version and the draft that leaked in November, and then briefly analyzes a few of the most important legal issues raised by the EO. This commentary is not intended to be comprehensive, and LawAI may publish additional commentaries and/or updates as events progress and additional legal issues come to light.
The EO’s nine sections are:
- Two prefatory policy statements, §§ 1–2, which consist of a four-paragraph “Purpose” section announcing the policy justifications for the order and a one-sentence “Policy” section that contains a high-level summary of the White House’s AI policy position.
- The gist of the Purpose section is that AI is a promising technology and that AI innovation is crucial to U.S. “national and economic security and dominance across many domains,” but that a “patchwork” of excessive state regulations threatens to suffocate innovation. The Purpose section also indicates that Congress should legislate to create a “minimally burdensome national standard” for AI regulation, but that the executive branch intends to get a head start on getting rid of burdensome state laws in the meantime.
- The Policy section reads: “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
- Section 3, which directs the U.S. Attorney General (AG) to establish an AI Litigation Task Force within the Department of Justice (DOJ), assigned to file lawsuits challenging state AI laws deemed by the AG to be unlawful. The EO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that DOJ can come up with.
- Section 4, which directs the Department of Commerce to publish an “evaluation” of state AI laws, including a list of “onerous” laws. Presumably this will include Colorado’s controversial AI law, SB 24-205, which is specifically called out in § 1 of the EO, along with other state laws not mentioned. This list is supposed to inform the efforts contemplated in other sections of the EO by identifying laws that should be challenged or otherwise attacked.
- Section 5, which contains two subsections that direct agencies to withhold federal grant funding from states that enact or enforce AI laws contrary to the EO’s policy goals. Subsection (a) indicates that the Department of Commerce will attempt to withhold non-deployment BEAD funding “to the maximum extent allowed by Federal law” from states with AI laws on the § 4 “onerous” list. Subsection (b) indicates that all federal agencies will assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or included in the § 4 list.
- Section 6, which instructs the Federal Communications Commission, in consultation with AI czar David Sacks, to start a process for determining whether to adopt a federal AI transparency standard that would preempt state AI transparency laws.
- Section 7, which directs the FTC to issue guidance arguing that certain state AI laws (presumably including, but not necessarily limited to, Colorado’s AI Act and other “woke” / “algorithmic discrimination” laws) are preempted by the FTC Act’s prohibition on deceptive commercial practices.
- Section 8, which instructs AI czar David Sacks and Office of Science and Technology Policy (OSTP) director Michael Kratsios to prepare a “legislative recommendation” to be submitted to Congress. This recommendation is supposed to lay out a “uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.”
- Section 9, which contains miscellaneous housekeeping provisions concerning how the EO should be interpreted and published.
How Does the Published Executive Order Differ from the Draft Version that Was Leaked in November?
As noted above, the EO is substantively very similar to the draft that leaked in November. There are, however, a number of sentence-level changes, most of which were presumably made for reasons of style and clarity. The published EO also includes a few changes that are arguably significant for signaling reasons—that is, because of what they seem to say about the White House’s plan for implementing the EO.
Most notably, Section 1 (the discussion of the EO’s “Purpose”) has been toned down in a few different ways. The initial draft specifically criticized both Colorado’s controversial algorithmic discrimination law and California’s Transparency in Frontier Artificial Intelligence Act (also known as SB 53), and dismissively referred to the “purely speculative suspicion that AI might ‘pose significant catastrophic risk.’” The leaked draft also suggested that “sophisticated proponents of a fear-based regulatory capture strategy” were responsible for these laws. The published version still criticizes the Colorado law, but does not contain any reference to SB 53, catastrophic risk, or regulatory capture. In light of this revision, it’s possible that SB 53—which is, by most accounts, a light-touch, non-burdensome transparency law that merely requires developers to create safety protocols of the sort that every frontier developer already creates and publishes—will not be identified as an “onerous” state AI law and targeted pursuant to the EO’s substantive provisions. To be clear, I think it’s still quite likely that SB 53 and similar transparency laws like New York’s RAISE Act will be targeted, but the removal of the explicit reference reduces the likelihood of that from “virtually certain” to “merely probable.”
This change seems like a win, albeit a minor one, for “AI safety” types and others who worry about future AI systems creating serious risks to national security and public safety. The AI Action Plan that the White House released in July seemed to take the prospect of such risks quite seriously, so the full-throated dismissal in the leaked draft would have been a significant change of course.
The published EO also throws a bone to child safety advocates, which may likewise be significant for signaling reasons. It was somewhat surprising that the leaked draft did not contain any reference to child safety, because child safety is an issue that voters and activists on both sides of the aisle care about a great deal. The political clout wielded by child safety advocates is such that Republican-led preemption efforts have typically included some kind of explicit carve-out or concession on the issue. For example, the final revision of the moratorium that ended up getting stripped out of the Big Beautiful Bill in late June attempted to carve out an exception for state laws relating to “child online safety,” and Dean Ball’s recent preemption proposal similarly attempts to placate child safety advocates by acknowledging the importance of the issue and adding transparency requirements specifically intended to protect children.
The published EO mentions children’s safety in § 1 as one of the issues that federal AI legislation should address. And § 8, the “legislative proposal” section, states that the proposal “shall not propose preempting otherwise lawful State AI laws relating to… child safety protections.” Much has been made of this carve-out in the online discourse, and it does seem important for signaling reasons. If the White House’s legislative proposal won’t target child safety laws, it seems reasonable to suggest that other White House efforts to eliminate certain state AI laws might steer clear of child safety laws as well. However, it’s worth noting that the exception in § 8 applies only to § 8, and not to the more important sections of the EO such as § 3 and § 5.
This leaves open the possibility that the Litigation Task Force might sue states with AI-related child safety laws, or that federal agencies might withhold discretionary grant funds from such states. Some right-leaning commentators have suggested that this is not a realistic possibility, because federal agencies will use their discretion to avoid going after child safety laws regardless of whether the EO specifically requires them to do so. It should be noted, however, that the category of “child safety laws” is broad and poorly defined, and that many of the state laws that the White House most dislikes could be reframed as child safety laws or amended to focus on child safety. In other words, a blanket policy of leaving “child safety” laws alone may not be feasible, or may not be attractive to the White House.
As for § 8 itself, a legislative proposal is just that—a proposal. It has no legal effect unless it is enacted into law by Congress. Congress can simply decline to enact the proposal—and given how rare it is for federal legislation (even legislation supported by the President) to actually be enacted, this is by far the most likely outcome. The White House has thrown its weight behind a number of preemption-related legislative proposals in the past, and so far none of them has made it through Congress. It’s possible that the legislative proposal contemplated in § 8 will fare better, but the odds are not good. In my opinion, therefore, the child safety exception in § 8 is significant mostly because of what it tells us about the administration’s policy preferences rather than because of anything that it actually does.
The final potentially significant change relates to § 5(b), which contemplates withholding federal grant funding from states that regulate AI. In the leaked draft, that section directed federal agencies to review their discretionary grants to see if any could lawfully be withheld from states with AI laws designated as “onerous.” The published EO directs agencies to do the same, but directs them to do so “in consultation with” AI czar David Sacks. It remains to be seen whether this change will mean anything in practice—David Sacks is one man, and a very busy man at that, and may not have the staffing support that would realistically be needed to meaningfully review every agency’s response to the EO. But whatever that “in consultation with” ends up meaning in practice, it seems plausible to suggest that the change may at least marginally increase agencies’ willingness to withhold funds.
Issue 1: The Litigation Task Force
The EO’s first substantive section, § 3, instructs the U.S. Attorney General to “establish an AI Litigation Task Force” charged with bringing lawsuits in federal court to challenge allegedly unlawful state AI laws. The EO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that the Department of Justice (DOJ) can identify.
Dormant commerce clause arguments
Presumably, the EO’s reference to the commerce clause refers to the dormant commerce clause argument laid out by Andreessen Horowitz in September 2025. This argument, which a number of commentators have raised in recent months, suggests that certain state AI laws violate the commerce clause of the U.S. Constitution because they impose excessive burdens on interstate commerce.
LawAI’s analysis indicates that this commerce clause argument, at least with respect to the state laws most commonly cited as potential preemption targets, is legally dubious and unlikely to succeed in court. We intend to publish a more thorough analysis of this issue in the coming weeks in addition to the overview included here.
In 2023, the Supreme Court issued an important dormant commerce clause opinion in the case of National Pork Producers Council v. Ross. The thrust of the majority opinion in that case, authored by Justice Gorsuch, is that state laws generally do not violate the dormant commerce clause unless they involve purposeful discrimination against out-of-state economic interests in order to favor in-state economic interests.
Even proponents of this dormant commerce clause argument typically acknowledge that the state AI laws they are concerned with generally do not discriminate against out-of-state economic interests. Therefore, they often ignore Ross, or cite the dissenting opinions while ignoring the majority. Their preferred precedent is Pike v. Bruce Church, Inc., a 1970 case in which the Supreme Court held that a state law with “only incidental” effects on interstate commerce does not violate the dormant commerce clause unless “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” This standard opens the door for potential challenges to nondiscriminatory laws that arguably impose a “clearly excessive” burden on interstate commerce.
The state regulation that was invalidated in Pike would have required cantaloupes grown in Arizona to be packed and processed in Arizona as well. The only state interest at stake was the “protect[ion] and enhance[ment] of [cantaloupe] growers within the state.” The Court in Pike specifically acknowledged that “[w]e are not, then, dealing here with state legislation in the field of safety where the propriety of local regulation has long been recognized.”
Even under Pike, then, it’s hard to come up with a plausible argument for invalidating the state AI laws that preemption advocates are most concerned with. Andreessen Horowitz’s argument is that the state proposals in question, such as New York’s RAISE Act, “purport to have significant safety benefits for their residents,” but in fact “are unlikely” to provide substantial safety benefits. But this is, transparently, a policy judgment, and one with which the state legislature of New York evidently disagrees. As Justice Gorsuch observes in Ross, “policy choices like these usually belong to the people and their elected representatives. They are entitled to weigh the relevant ‘political and economic’ costs and benefits for themselves, and ‘try novel social and economic experiments’ if they wish.” New York voters overwhelmingly support the RAISE Act, as did an overwhelming majority of New York’s state legislature when the bill was put to a vote. In my opinion, it is unlikely that any federal court will presume to override those policy judgments and substitute its own.
That said, it is possible to imagine a state AI law that would violate the dormant commerce clause. For example, a law that placed burdensome requirements on out-of-state developers while exempting in-state developers, in order to grant an advantage to in-state AI companies, would likely be unconstitutional. Since I haven’t reviewed every state AI bill that has been or will be proposed, I can’t say for sure that none of them would violate the dormant commerce clause. It is entirely possible that the Task Force will succeed in invalidating one or more state laws via a dormant commerce clause challenge. It does seem relatively safe, however, to predict that the specific laws referred to in the executive order and the state frontier AI safety laws most commonly referenced in discussions of preemption would likely survive any dormant commerce clause challenges brought against them.
State laws preempted by existing federal regulations
Section 3 also specifically indicates that the AI Litigation Task Force will challenge state laws that “are preempted by existing Federal regulations.” It is possible for state laws to be preempted by federal regulations, and, as with the commerce clause issue discussed above, it’s possible that the Task Force will eventually succeed in invalidating some state laws by arguing that they are so preempted.
In the absence of significant new federal AI regulation, however, it is doubtful whether many of the state laws the EO is intended to target will be vulnerable to this kind of legal challenge. Moreover, any state AI law that created significant compliance costs for companies and was plausibly preempted by existing federal regulations could be challenged by the affected companies, without the need for DOJ intervention. The fact that (to the best of my knowledge) no such lawsuit has yet been filed challenging the most notable state AI laws indicates that the new Task Force will likely be faced with slim pickings, at least until new federal regulations are enacted and/or state regulation of AI intensifies.
Alternative grounds
Section 3 also authorizes the Task Force to challenge state AI laws that are “otherwise unlawful” in the Attorney General’s judgment. The Department of Justice employs a great number of smart and creative lawyers, so it’s impossible to say for sure what theories they might come up with to challenge state AI laws. That said, preemption of state AI laws has been a hot topic for months now, and the best theories that have been publicly floated for preemption by executive action are the dormant commerce clause theory discussed above and the Communications Act theory discussed under Issue 3 below. This is, it seems fair to say, a bearish indicator, and I would be somewhat surprised if the Task Force managed to come up with a slam dunk legal argument for broad-based preemption that has hitherto been overlooked by everyone who’s considered this issue.
Issue 2: Restrictions on State Funding
Section 5 of the EO contains two subsections directing agencies to withhold federal grant funding from states that attempt to regulate AI. Subsection (a) indicates that Commerce will attempt to withhold non-deployment Broadband Equity Access and Deployment (BEAD) funding “to the maximum extent allowed by federal law” from states with AI laws listed pursuant to § 4 of the EO, which instructs the Department of Commerce to identify state AI laws that conflict with the policy directives laid out in § 1 of the EO. Subsection (b) instructs all federal agencies to assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or identified as undesirable pursuant to § 4.
In my view, § 5 of the EO is the provision with the most potential to affect state AI legislation. While § 5 does not attempt to actually preempt state AI laws, the threat of losing federal grant funds could have the practical effect of incentivizing some states to abandon their AI-related legislative efforts. And, as Daniel Cochrane and Jack Fitzhenry pointed out during the reconciliation moratorium fight, “Smaller conservative states with limited budgets and large rural populations need [BEAD] funds. But wealthy progressive states like California and New York can afford to take a pass and just keep enforcing their tech laws.” While politicians in deep blue states will be politically incentivized to fight the Trump administration’s attempts to preempt overwhelmingly popular AI laws even if it means losing access to some federal funds, politicians in red states may instead be incentivized to avoid conflict with the administration.
Section 5(a): Non-deployment BEAD funding
Section 5(a) of the EO is easier to analyze than § 5(b), because it clearly identifies the funds that are in jeopardy—non-deployment BEAD funding. The BEAD program is a $42.45 billion federal grant program established by Congress in 2021 for the purpose of facilitating access to reliable, high-speed broadband internet for communities throughout the U.S. A portion of the $42.45 billion total was allocated to each of 56 states and territories in June 2023 by the National Telecommunications and Information Administration (NTIA). In June 2025, the NTIA announced a restructuring of the BEAD program that eliminated many Biden-era requirements and rescinded NTIA approval for all “non-deployment” BEAD funding, i.e., BEAD funding that states intended to spend on uses other than actually building broadband infrastructure. The total amount of BEAD funding that will ultimately be classified as “non-deployment” is estimated to be more than $21 billion.
BEAD funding was previously used as a carrot and stick for AI preemption in June 2025, as part of the effort to insert a moratorium or “temporary pause” on state AI regulation into the most recent reconciliation bill. There are two critical differences between the attempted use of BEAD funding in the reconciliation process and its use in the EO, however. First, the EO is, obviously, an executive order rather than a legislative enactment. This matters because agency actions that would be perfectly legitimate if authorized by statute can be illegal if undertaken without statutory authorization. Second, while the final drafts of the reconciliation moratorium would only have jeopardized BEAD funding belonging to states that chose to accept a portion of $500 million in additional BEAD funding that the reconciliation bill would have appropriated, the EO would jeopardize non-deployment BEAD funding belonging to any state that attempts to regulate AI in a manner deemed undesirable under the EO.
The multibillion-dollar question here is: can the administration legally withhold BEAD funding from states because those states enact or enforce laws regulating AI? Unsatisfyingly enough, the answer to this question for now seems to be “no one knows for sure.” Predicting the outcome of a future court case that hasn’t been filed yet is always difficult, and here it’s especially difficult because it’s not clear exactly how the NTIA will choose to implement § 5(a) in light of the EO’s requirement to withhold funds only “to the maximum extent allowed by federal law.” That said, there is some reason to believe that states would have at least a decent chance of prevailing if they sued to prevent NTIA from withholding funds from AI-regulating states.
The basic argument against what the EO asks NTIA to do is simply that Congress provided a formula for allocating BEAD program funds to states, and did not authorize NTIA to withhold those congressionally allocated funds from states in order to vindicate unrelated policy goals. The EO anticipates this argument and attempts to manufacture a connection between AI and broadband by suggesting that “a fragmented State regulatory landscape for AI threatens to undermine BEAD-funded deployments, the growth of AI applications reliant on high-speed networks, and BEAD’s mission of delivering universal, high-speed connectivity.” In my view, this is a hard argument to take seriously. It’s difficult to imagine any realistic scenario in which (for example) laws imposing transparency requirements on AI companies would have any significant effect on the ability of internet providers to build broadband infrastructure. Still, the important question is not whether NTIA has the statutory authority to withhold funds as the EO contemplates, but rather whether states will be able to actually do anything about it.
The Trump administration’s Department of Transportation (DOT) recently attempted a maneuver similar to the one contemplated in § 5 when, in response to an executive order directing agencies to “undertake any lawful actions to ensure that so-called ‘sanctuary’ jurisdictions… do not receive access to federal funds,” the DOT attempted to add conditions to all DOT grant agreements requiring grant recipients to cooperate in the enforcement of federal immigration law. Affected states promptly sued to challenge the addition of this grant condition and successfully secured a preliminary injunction prohibiting DOT from implementing or enforcing the conditions. In early November 2025, the U.S. District Court for the District of Rhode Island ruled that the challenged conditions were unlawful for three separate reasons: (1) imposing the conditions exceeded the DOT’s statutory authority under the laws establishing the relevant grant programs; (2) imposing the conditions was “arbitrary and capricious,” in violation of the Administrative Procedure Act; and (3) imposing the conditions violated the Spending Clause of the U.S. Constitution. It remains to be seen whether the district court’s ruling will be upheld by a federal appellate court and/or by the U.S. Supreme Court.
The lawsuit described above should give you some idea of what to expect from a lawsuit challenging NTIA withholding of BEAD funds. It’s likely that states would make both statutory and constitutional arguments; in fact, they might even make spending clause, APA, and ultra vires (i.e., exceeding statutory authority) arguments similar to the ones discussed above. However, there are important differences between the executive actions that gave rise to that DOT case and the actions contemplated by § 5(a). For one thing, 47 U.S.C. § 1702(o) exempts the NTIA’s BEAD-related decisions from the requirements of the APA, meaning that it will likely be harder for states to challenge NTIA actions as being arbitrary and capricious. For another, 47 U.S.C. § 1702(n) dictates that all lawsuits brought under BEAD’s authorizing statute to challenge NTIA’s BEAD decisions will be subject to a standard of review that heavily favors the government. Essentially, this standard of review says that the NTIA’s decisions can’t be overturned unless they’re the result of corruption, fraud, or misconduct.
This overview isn’t the place to get too deep into the weeds on the question of whether and how states might be able to get around these statutory hurdles. Suffice it to say that there are plausible arguments to be made on both sides of the debate. For example, courts sometimes hold that a lawsuit arguing that an agency’s actions are in excess of its statutory authority does not arise “under” the statute in question and is therefore not subject to the statute’s standard of review (although this is a narrow exception to the usual rule).
Suppose that, in the future, the Department of Commerce decides to withhold non-deployment BEAD funding from states with AI laws deemed undesirable under the EO. States could challenge this decision in court and ask the court to order NTIA to release the previously allocated non-deployment funds, arguing that the withholding of funds exceeded NTIA’s authority under the statute authorizing BEAD and violated the Spending Clause. Each of these arguments seems at least somewhat plausible, on an initial analysis. Nothing in the statute authorizing BEAD appears to give the federal government unlimited discretion to withhold BEAD funds to vindicate policy goals that have little or nothing to do with access to broadband, and the course of action proposed in the EO is, arguably, impermissibly coercive in violation of the Spending Clause.
AI regulation is a less politically divisive issue than immigration enforcement, and a cynical observer might assume that this would give states in this hypothetical AI case a better chance on appeal than the states in the DOT immigration conditions case discussed above. However, the statutory hurdles discussed above may make it harder for states to prevail here than it was for the plaintiff states in the DOT conditions case. It should also be noted that, regardless of whether or not states could eventually prevail in a hypothetical lawsuit, the prospect of having BEAD funding denied or delayed, perhaps for years, could be enough to discourage some states from enacting AI legislation of a type disfavored by the Department of Commerce under the EO.
Section 5(b): Other discretionary agency funding
In addition to withholding non-deployment BEAD funding, the EO instructs agencies throughout the executive branch to assess their discretionary grant programs and determine whether discretionary grants can legally be withheld from states that have AI laws that “conflict[] with the policy of this order.”
The legality of this contemplated course of action, and its likelihood of being upheld in court, is even more difficult to conclusively determine ex ante than the legality and prospects of the BEAD withholding discussed above. The federal government distributes about a trillion dollars a year in grants to state and local governments, and more than a quarter of that money is in the form of discretionary grants (as opposed to grants from mandatory programs such as Medicaid). That’s a lot of money, and it’s broken up into a lot of different discretionary grants. It seems safe to predict that most discretionary grants will not be subject to withholding, since the one thing that all potential candidates will have in common is that Congress did not anticipate that they would be withheld in order to prevent state AI regulation. But depending on the amount of discretion Congress conferred on the agency in question, it may be that some grants can be withheld for almost any reason or for no reason at all. There may also be some grants that legitimately relate to the tech deregulation policy goals the administration is pursuing here.
It’s likely that many of the arguments against withholding grant money from AI-regulating states will be the same from one grant to another—as discussed above in the context of § 5(a), states will likely argue that withholding grant funds to vindicate unrelated policy goals violates the Spending Clause, exceeds the agency’s statutory authority, and violates the Administrative Procedure Act. These arguments will be stronger with respect to some grants and weaker with respect to others, depending on factors such as the language of the authorizing statute and the purpose for which the grant was to be awarded. At this point, therefore, there’s no way to know for sure how much money the federal government will attempt to withhold and how much (if any) it will actually succeed in withholding. Nor is it clear which states will resort to litigation and which the administration will succeed in pressuring into giving up their AI regulations without a fight. Unlike many other provisions of the EO, § 5(b) does not contain a deadline by which agencies must complete their review, so it’s possible that we won’t have a fuller picture of which grants will be in danger for many months.
Issue 3: Federal Reporting and Disclosure Standard
Section 6 of the EO instructs the FCC, in consultation with AI czar David Sacks, to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” It’s likely that the “conflicting state laws” referred to include state AI transparency laws such as California’s SB 53 and New York’s RAISE Act. It’s not clear from the language of the EO what legal authority this “Federal Reporting and Disclosure Standard” would be promulgated under. Under the Biden administration, the Department of Commerce’s Bureau of Industry and Security (BIS) controversially attempted to impose reporting requirements on frontier model developers under the information-gathering authority provided by § 705 of the Defense Production Act—but § 705 has historically been used by BIS rather than the FCC, and I am not aware of any comparable authority that would authorize the FCC to implement a mandatory “federal reporting and disclosure standard” for AI models.
Generally, regulatory preemption can only occur when Congress has granted an executive-branch agency authority to promulgate regulations and preempt state laws inconsistent with those regulations. This authority can be granted expressly or by implication, but the FCC has never before asserted that it possesses any significant regulatory authority (express or otherwise) over any aspect of AI development. It’s possible that the FCC is relying on a creative interpretation of its authority under the Communications Act—after the AI Action Plan discussed the possibility of FCC preemption, FCC Chairman Brendan Carr indicated that the FCC was “taking a look” at whether the Communications Act grants the FCC authority to regulate AI and preempt onerous state laws. However, commentators who have researched this issue and experts on the FCC’s legal authorities almost universally agree that “[n]othing in the Communications Act confers FCC authority to regulate AI.”
The fundamental obstacle to FCC preemption of state AI laws is that the Communications Act authorizes the FCC to regulate telecommunications services, and AI is not a telecommunications service. In the past, the FCC has sometimes suggested expansive interpretations of the Communications Act in order to claim more regulatory territory for itself, but claiming broad regulatory authority over AI would be significantly more ambitious than these (frequently unsuccessful) prior attempts. Moreover, this kind of creative reinterpretation of old statutes to create new agency authorities is much harder to get past a court today than it would have been even ten years ago, because of Supreme Court decisions eliminating Chevron deference and establishing the major questions doctrine. In his comprehensive policy paper on FCC preemption of state AI laws, Lawrence J. Spiwak (a staunch supporter of preemption) analyzes the relevant precedents and concludes that “given the plain language of the Communications Act as well as the present state of the caselaw, it is highly unlikely the FCC will succeed in [AI preemption] efforts” and that “trying to contort the Communications Act to preempt the growing patchwork of disparate state AI laws is a Quixotic exercise in futility.” Harold Feld of Public Knowledge essentially agrees with this assessment in his piece on the same topic.
Issue 4: Preemption of state laws for “deceptive practices” under the FTC Act
Section 7 of the EO directs the Federal Trade Commission (FTC) to issue a policy statement arguing that certain state AI laws are preempted by the FTC Act’s prohibition on deceptive commercial practices. Presumably, the laws which the EO intends for this guidance to target include Colorado’s AI Act, which the EO’s Purpose section accuses of “forc[ing] AI models to produce false results in order to avoid a ‘differential treatment or impact’” on protected groups, and other similar “algorithmic discrimination” laws. A policy statement on its own generally cannot preempt state laws, but it seems likely that the policy statement that the EO instructs the FTC to create would be relied upon in subsequent preemption-related regulatory efforts and/or by litigants seeking to prevent enforcement of the allegedly preempted laws in court.
While the Trump administration has previously expressed disapproval of “woke” AI development practices, for example in the recent executive order on “Preventing Woke AI in the Federal Government,” this argument that the FTC Act’s prohibition on UDAP (unfair or deceptive acts or practices in or affecting commerce) preempts state algorithmic discrimination laws is, as far as I am aware, new. During the Biden administration, Lina Khan’s FTC published guidance containing an arguably similar assertion: that the “sale or use of—for example—racially biased algorithms” would be an unfair or deceptive practice under the FTC Act. Khan’s FTC did not, however, attempt to use this aggressive interpretation of the FTC Act as a basis for FTC preemption of any state laws. In fact, as far as I can tell, the FTC has never used the FTC Act’s prohibition on deceptive acts or practices to preempt state civil rights or consumer protection laws, no matter how misguided those laws may be, meaning that the approach contemplated by the EO appears to be totally unprecedented.
Colorado’s AI law, SB 24-205, has been widely criticized, including by Governor Jared Polis (who signed the act into law) and other prominent Colorado politicians. In fact, the law has proven so problematic for Colorado that Governor Polis, a Democrat, was willing to cross party lines in order to support broad-based preemption of state AI laws for the sake of getting rid of Colorado’s. Therefore, an attempt by the Trump administration to preempt Colorado’s law (or portions thereof) might meet with relatively little opposition from within Colorado. It’s not clear who, if anyone, would have standing to challenge FTC preemption of Colorado’s law if Colorado’s attorney general refused to do so. But Colorado is not the only state with a law prohibiting algorithmic discrimination, and presumably the guidance the EO instructs the FTC to produce would inform attempts to preempt other “woke” state AI laws as well as Colorado’s.
If the matter did go to court, however, it seems likely that states would prevail. As bad as Colorado’s law may be (and, personally, I think it’s a pretty bad law), it’s very difficult to plausibly argue that it, or any similar state algorithmic discrimination law, requires any “deceptive act or practice affecting commerce.” The Colorado law requires developers and deployers of certain AI systems to use “reasonable care” to protect consumers from “algorithmic discrimination.” It also imposes a headache-inducing laundry list of documentation and record-keeping requirements on developers and deployers, which mostly relate to documenting efforts to avoid algorithmic discrimination. But, crucially, none of the law’s requirements appear to dictate that any AI output has to be untruthful—and regardless, creating an untruthful output need not be a “deceptive act or practice” under the FTC Act if the company provides consumers with enough information to ensure that they will not be deceived by the untruthful output.
“Algorithmic discrimination” is defined in the Colorado law to mean “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived [list of protected characteristics].” Note that only “unlawful” discrimination qualifies. FTC precedents establish that a deceptive act or practice occurs when there is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably under the circumstances. The EO’s language seems to ask the FTC to argue that the prohibition on “differential impacts” will in practice require untruthful outputs because it prohibits companies from acknowledging the reality of group differences. But since only “unlawful” differential impacts are prohibited by the Colorado law, the only circumstance in which the Colorado law could be interpreted to require untruthful outputs is if some other valid and existing law already required such an output. And, again, even a requirement that did in practice encourage the creation of untruthful outputs would not necessarily result in “deception,” especially given the extensive disclosure requirements that the Colorado law includes.
AI Preemption and “Generally Applicable” Laws
Proposals for federal preemption of state AI laws, such as the moratorium that was removed from the most recent reconciliation bill in June 2025, often include an exception for “generally applicable” laws. Despite the frequency with which this phrase appears in legislative proposals and the important role it plays in the arguments of preemption advocates, however, there is very little agreement among experts as to what exactly “generally applicable” means in the context of AI preemption. Unfortunately, this means that, for any given preemption proposal, it’s often the case that very little can be said for certain about which laws will or will not be exempted.
The most we can say for sure is that the term “generally applicable” is supposed to describe a law that does not single out or target artificial intelligence specifically. Thus, a state law like California’s recently enacted “Transparency in Frontier Artificial Intelligence Act” (SB 53) would likely not be considered “generally applicable” by a court, because it imposes new requirements specifically on AI companies, rather than requirements that apply “generally” and affect AI companies only incidentally if at all.
This basic definition, however, leaves a host of important questions unanswered. What about laws that don’t specifically mention AI, but nevertheless are clearly intended to address issues created by AI systems? Tennessee’s ELVIS Act, which was designed to protect musicians from unauthorized commercial use of their voices, is one example of such a law. It prohibits the reproduction of an artist’s voice by any technological means, but the law was obviously passed in 2024 because recent advances in AI capabilities have made it possible to reproduce celebrity voices far more accurately than was previously possible. Alternatively, what about laws that were not originally intended to apply to AI systems, but which happen to place a disproportionate burden on AI systems relative to other technologies? No one knows precisely how a court would resolve the question of whether such laws are “generally applicable,” and if you asked four different people who think about AI preemption for a living you might well get four different answers. If federal preemption legislation is eventually enacted, and if an exception for “generally applicable” laws is included, this question will likely be extensively litigated—and it’s likely that different courts will come to different conclusions.
Usually, the best way to get an idea of how a court will interpret a given phrase is to look at how courts have interpreted the same phrase in similar contexts in the past. However, while there is some existing case law discussing the meaning of “generally applicable” in the context of preemption, LawAI’s research hasn’t turned up any cases that shed a great deal of light on the question of what the term would mean in the specific context of AI preemption. It’s therefore likely that we won’t have a clear idea of what “generally applicable” really means until some years from now, when courts may (or may not) have had occasion to answer the question with respect to a variety of different arguably “generally applicable” state laws.
Last updated: December 11, 2025, at 4:19 p.m. Eastern Time
The Genesis Mission Executive Order: What It Does and How it Shapes the Future of AI-Enabled Scientific Research
Summary
- The Genesis Mission EO seeks to build a federal AI-enabled science platform by directing the Department of Energy to organize, plan, and begin assembling the needed technical resources, such as computing, AI models, and data.
- DOE will identify 20+ scientific and technology challenges, map federal compute and data resources, and demonstrate an initial capability using existing infrastructure.
- As with many EOs, the order assigns and coordinates responsibilities, but cannot itself provide new funding or legal authority, so realizing the Genesis Mission’s full vision will depend on Congress, other agencies, and private sector partners.
- The effort plays to DOE’s strengths—the national labs, high-performance computing, diverse scientific capabilities—and gives the federal government an opportunity to build capacity to understand and govern advancing AI-enabled science.
- The EO creates a timely opportunity to align federal policy with the Genesis Mission and modernize oversight for potential dual-use concerns as AI, large datasets, and automated labs accelerate scientific research.
On November 24, the White House released an Executive Order launching the Genesis Mission—a bold plan to build a unified national AI-enabled science platform linking federal supercomputers, secure cloud networks, public and proprietary datasets, scientific foundation models, and even automated laboratory systems. The Administration frames the Genesis Mission as a Manhattan Project-scale scientific effort.
The EO lays out the organizational and planning framework for the Genesis Mission and tasks the Department of Energy with assembling the resources required to launch it. Working in highly consequential scientific domains—such as biotechnology, where dual-use safety and security issues routinely arise—gives the federal government a timely opportunity to build the oversight and governance capacity that will be needed as AI-enabled science advances.
1. What the EO Actually Does
The EO directs the DOE and White House Office of Science and Technology Policy (OSTP) to spend the next year defining the scope of the Genesis Mission and proving what can be done using existing authority and appropriations. It’s important to keep in mind that an Executive Order cannot itself create new funding or new legal authority, so future steps will depend on Congressional action.
Mandated near-term tasks include:
- Identify at least twenty “science and technology challenges of national importance” that must span priority domains such as biotechnology, advanced manufacturing, critical materials, quantum computing, nuclear science, and semiconductors. DOE will draft the initial list, and OSTP will expand and finalize it.
- Inventory all relevant federal resources, including computing, data, networking, and automated experimentation capabilities.
- Define initial datasets and AI models and develop a plan with “risk-based cybersecurity measures” that will enable incorporating data from federally funded research, other agencies, academia, and approved private sector partners.
- Produce an initial demonstration of the “American Science and Security Platform,” using only currently available tools and legal authorities.
These are primarily coordination and planning tasks aimed at defining the scope of an integrated AI science platform and demonstrating what can be done with existing resources within DOE. DOE’s activities set forth in the EO appear to align with Section 50404 of the OBBBA reconciliation bill (H.R. 1), which appropriates $150 million through September 2026 to DOE for work on “transformational artificial intelligence models.” Although not referenced in the EO, Section 50404 directs DOE to develop a public-private infrastructure to curate large scientific datasets and create “self-improving” AI models with applications such as more efficient chip design and new energy technologies. DOE’s Section 50404 appropriation is the subject of an ongoing Request for Information (RFI), in which DOE is seeking input on how to structure and implement such public-private research consortia.
The EO does not itself mandate building the full system beyond DOE. Rather, these steps begin the process of assembling underlying infrastructure. The EO outlines broad interagency coordination, but key details need to be worked out, including who can access the platform, how users will be vetted, and whether it will be open to broad scientific use or limited to national security-priority domains.
In that sense, the EO is best understood as establishing the groundwork for a future AI-enabled and automated science infrastructure—while its full build-out will depend on Congress, other agencies, and private sector partnerships.
2. Who Holds the Pen
The Genesis Mission envisions centralized leadership for interagency coordination, with two primary actors:
- DOE: Responsible for identifying and assembling the technical components: supercomputers, datasets, models, automated labs.
- OSTP: Responsible for government-wide coordination through the National Science and Technology Council.
Technical leadership will likely sit with Under Secretary for Science Darío Gil, who oversees the DOE national labs and major research programs. Strategic coordination, including interactions with other agencies and industry, will likely run through Michael Kratsios, OSTP Director and Presidential Science Advisor.
The EO directs only DOE to take specific actions. This means that the interagency coordination is, for now, largely aspirational and will likely depend on Congressional action to add or redirect funding for work on the Genesis Mission. At this point, the EO envisions DOE as the primary operator of the eventual platform, with OSTP shaping strategy. The practical impact of the Mission will largely depend on how these resources are ultimately shared and made accessible across agencies, which the EO leaves open for now.
3. The Goal: Accelerating High-Stakes Science
Here’s where the Genesis Mission may be most consequential. The EO envisions a platform that sits at the center of scientific domains with national and economic significance. These are areas where integrating AI models, data from government and private databases, and automated, high-throughput experimentation can provide high leverage.
For example, in biological research, an integrated AI-science platform could accelerate drug development, improve biomanufacturing, strengthen pandemic preparedness, tackle chronic disease, and support emerging industries that drive economic growth and help the United States maintain global leadership. DOE is well positioned to contribute here, given its national laboratories, high-performance computing, and experience managing large-scale scientific infrastructure.
The Genesis Mission EO suggests that the Administration expects the Mission to support research with high scientific value as well as complex security and safety considerations. While it doesn’t reference new or existing regulations, the EO requires DOE to operate the platform consistent with:
- classification rules,
- supply-chain security requirements,
- federal cybersecurity standards, and
- “uniform and stringent” data-access and data-management processes with strong vetting for external users.
A system that integrates large biological datasets, frontier-scale foundation models, and automated lab workflows could dramatically accelerate discovery. It’s important to keep in mind, however, that such capabilities can also intersect with longstanding dual-use concerns: areas where the same tools that advance beneficial research might also lower barriers to potential harms.
4. Why Governance Matters for the Genesis Mission
Biology offers a clear example of the kinds of oversight challenges that can arise as AI accelerates scientific research. AI and lab automation can lower barriers to manipulating or enhancing dangerous pathogens, work that is often referred to as “gain-of-function” research.
Importantly, the launch of the Genesis Mission comes while key federal biosafety revisions are still in progress. In May, the White House issued Executive Order 14292, “Improving the Safety and Security of Biological Research.” That EO called for strengthening oversight of certain high-consequence biological research, including gain-of-function research. It assigned several tasks to OSTP, including:
- Revise or replace the 2024 Framework for Nucleic Acid Synthesis Screening.
- Replace the withdrawn 2024 dual-use and enhanced-pathogen oversight policy.
- Develop a strategy to “govern, limit, and track” gain-of-function and dual-use biological research outside federally funded environments.
Since then, there has been partial progress towards these goals, including NIH and USDA funding bans on gain-of-function research. But several other updates called for in EO 14292 have not been finalized. The Genesis Mission creates both an opportunity and a need to advance this work. By accelerating AI-enabled scientific research, the Mission heightens the importance of clear, modernized biosafety and biosecurity guidance—and gives the Administration a natural venue to advance it.
As DOE begins integrating advanced computation, large biological datasets, and automated experimentation, it becomes even more valuable to clarify how federal guidance should apply to AI-augmented research. The Genesis Mission may ultimately help spur the release of updated oversight frameworks and encourage broader policy discussions—including potential legislation—on how to manage dual-use research in the era of integrated AI for science platforms.
These issues aren’t limited to biology either. The Genesis Mission EO names nuclear science, quantum computing, advanced materials, and other domains where AI-accelerated discovery creates both major opportunities and critical governance issues.
5. The Hard Policy Questions Ahead
At first, the Genesis Mission will likely be a largely DOE-run effort limited to federal researchers and a small group of partners. But if it grows along the ambitious lines the EO lays out, managing who can access it—and how—becomes far more challenging. Once integrated AI-driven systems can design, optimize, or automate significant parts of scientific research, regulation becomes both urgent and harder to enforce in a uniform way:
- Federal rules can clearly govern federally funded research.
- But what about private-sector scientists who combine federal AI models or datasets with independently conducted wet-lab work?
- And what about academic or corporate AI-scientific platforms—built outside federal systems—that also integrate scientific data, advanced models, and automated labs?
These private and academic systems may be entirely outside federal oversight, complicating attempts to build coherent guardrails.
If the Genesis Mission succeeds, it will generate substantial new scientific data that will help train more capable models and enable new research pathways. At the same time, access to more powerful models and broader datasets will increase the importance of developing effective policies for data governance, user access, and managing research across the government and with the private sector.
6. Bottom Line
The Genesis Mission sets an ambitious vision for a unified AI-enabled science platform within the federal government. Its success will depend on future funding, interagency participation, and sustained follow-through. But even at this early planning stage, the EO brings core policy issues to the surface: oversight, data governance, access rules, and how to manage research that cuts across agencies and private sector entities.
As DOE and OSTP begin work on the Genesis Mission, the effort also creates a timely opportunity for the federal government to update dual-use oversight frameworks, such as the biosafety measures called for by EO 14292, and to build the governance structures needed for AI-accelerated science.
Legal Issues Raised by the Proposed Executive Order on AI Preemption
On November 19, 2025, a draft of an executive order that the Trump administration may issue as early as Friday, November 21, was publicly leaked. The six-page order consists of nine sections, including prefatory purpose and policy statements, a section containing miscellaneous “general provisions,” and six substantive provisions. This commentary provides a brief overview of some of the most important legal issues raised by the draft executive order (DEO). This commentary is not intended to be comprehensive, and LawAI may publish additional commentaries and/or updates as events progress and additional legal issues come to light.
As an initial matter, it’s important to understand what an executive order is and what legal effect executive orders have in the United States. An executive order is not a congressionally enacted statute or “law.” While Congress undoubtedly has the authority to preempt some state AI laws by passing legislation, the President generally cannot unilaterally preempt state laws by presidential fiat (nor does the DEO purport to do so). An executive order can publicly announce the policy goals of the executive branch of the federal government, and can also contain directives from the President to executive branch officials and agencies.
Issue 1: The Litigation Task Force
The DEO’s first substantive section, § 3, would instruct the U.S. Attorney General to “establish an AI Litigation Task Force” charged with bringing lawsuits in federal court to challenge allegedly unlawful state AI laws. The DEO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that the Department of Justice (DOJ) can identify.
Dormant commerce clause arguments
Presumably, the DEO’s reference to the commerce clause refers to the dormant commerce clause argument laid out by Andreessen Horowitz in September 2025. This argument, which a number of commentators have raised in recent months, suggests that certain state AI laws violate the commerce clause of the U.S. Constitution because they impose excessive burdens on interstate commerce. LawAI’s analysis indicates that this commerce clause argument, at least with respect to the state laws specifically referred to in the DEO, is legally meritless and unlikely to succeed in court. We intend to publish a more thorough analysis of this issue in the coming weeks in addition to the overview included here.
In 2023, the Supreme Court issued an important dormant commerce clause opinion in the case of National Pork Producers Council v. Ross. The thrust of the majority opinion in that case, authored by Justice Gorsuch, is that state laws generally do not violate the dormant commerce clause unless they involve purposeful discrimination against out-of-state economic interests in order to favor in-state economic interests.
Even proponents of this dormant commerce clause argument typically acknowledge that the state AI laws they are concerned with generally do not discriminate against out-of-state economic interests. Therefore, they often ignore Ross, or cite the dissenting opinions while ignoring the majority. Their preferred precedent is Pike v. Bruce Church, Inc., a 1970 case in which the Supreme Court held that a state law with “only incidental” effects on interstate commerce does not violate the dormant commerce clause unless “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” This standard opens the door for potential challenges to nondiscriminatory laws that arguably impose a “clearly excessive” burden on interstate commerce.
The state regulation that was invalidated in Pike would have required cantaloupes grown in Arizona to be packed and processed in Arizona as well. The only state interest at stake was the “protect[ion] and enhance[ment] of [cantaloupe] growers within the state.” The Court in Pike specifically acknowledged that “[w]e are not, then, dealing here with state legislation in the field of safety where the propriety of local regulation has long been recognized.”
Even under Pike, then, it’s hard to come up with a plausible argument for invalidating the state AI laws that preemption advocates are concerned with. Andreessen Horowitz’s argument is that the state proposals in question, such as New York’s RAISE Act, “purport to have significant safety benefits for their residents,” but in fact “are unlikely” to provide substantial safety benefits. But this is, transparently, a policy judgment, and one with which the state legislature of New York evidently disagrees. As Justice Gorsuch observes in Ross, “policy choices like these usually belong to the people and their elected representatives. They are entitled to weigh the relevant ‘political and economic’ costs and benefits for themselves, and ‘try novel social and economic experiments’ if they wish.” New York voters overwhelmingly support the RAISE Act, as did an overwhelming majority of New York’s state legislature when the bill was put to a vote. In my opinion, it is unlikely that any federal court will presume to override those policy judgments and substitute its own.
That said, it is possible to imagine a state AI law that would violate the dormant commerce clause. For example, a law that placed burdensome requirements on out-of-state developers while exempting in-state developers, in order to grant an advantage to in-state AI companies, would likely be unconstitutional. Since I haven’t reviewed every state AI bill that has been or will be proposed, I can’t say for sure that none of them would violate the dormant commerce clause. It is entirely possible that the Task Force will succeed in invalidating one or more state laws via a dormant commerce clause challenge. It does seem relatively safe, however, to predict that the specific laws referred to in the executive order and the state frontier AI safety laws most commonly referenced in discussions of preemption would likely survive any dormant commerce clause challenges brought against them.
State laws preempted by existing federal regulations
Section 3 of the DEO also specifically indicates that the AI Litigation Task Force will challenge state laws that “are preempted by existing Federal regulations.” State laws can indeed be preempted by federal regulations, and, as with the commerce clause issue discussed above, the Task Force may eventually succeed in invalidating some state laws on this ground.
In the absence of significant new federal AI regulation, however, it is doubtful whether many of the state laws the DEO is intended to target will be vulnerable to this kind of legal challenge. Moreover, any state AI law that created significant compliance costs for companies and was plausibly preempted by existing federal regulations could be challenged by the affected companies, without the need for DOJ intervention. The fact that (to the best of my knowledge) no such lawsuit has yet been filed challenging the most notable state AI laws indicates that the new Task Force will likely be faced with slim pickings, at least until new federal regulations are enacted and/or state regulation of AI intensifies.
It seems likely that § 3’s reference to preemption via existing federal regulation is at least partially intended to refer to Communications Act preemption as discussed in the AI Action Plan. There is a major obstacle to preempting state AI laws on this basis, however: the Communications Act provides the FCC (and sometimes courts) with some authority to preempt certain state laws regulating “telecommunications services” and “information services,” but existing legal precedents clearly establish that AI systems fall into neither category. In his comprehensive policy paper on FCC preemption of state AI laws, Lawrence J. Spiwak (a staunch supporter of preemption) analyzes the relevant precedents and concludes that “given the plain language of the Communications Act as well as the present state of the caselaw, it is highly unlikely the FCC will succeed in [AI preemption] efforts” and that “trying to contort the Communications Act to preempt the growing patchwork of disparate state AI laws is a Quixotic exercise in futility.” Harold Feld of Public Knowledge essentially agrees with this assessment in his piece on the same topic.
Alternative grounds
Section 3 also authorizes the Task Force to challenge state AI laws that are “otherwise unlawful” in the Attorney General’s judgment. The Department of Justice employs a great number of smart and creative lawyers, so it’s impossible to say for sure what theories they might come up with to challenge state AI laws. That said, preemption of state AI laws has been a hot topic for months now, and the best theories that have been publicly floated for preemption by executive action are the dormant commerce clause and Communications Act theories discussed above. This is, it seems fair to say, a bearish indicator, and I would be somewhat surprised if the Task Force managed to come up with a slam-dunk legal argument for broad-based preemption that has hitherto been overlooked by everyone who’s considered this issue.
Issue 2: Restrictions on State Funding
Section 5 of the DEO contains two subsections that concern efforts to withhold federal grant funding from states that attempt to regulate AI. Subsection (a) indicates that Commerce will attempt to withhold non-deployment Broadband Equity, Access, and Deployment (BEAD) funding “to the maximum extent allowed by federal law” from states with AI laws listed pursuant to § 4 of the DEO, which instructs the Department of Commerce to identify state AI laws that conflict with the policy directives laid out in § 1 of the DEO. Subsection (b) instructs all federal agencies to assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or identified as undesirable pursuant to § 4.
In my view, § 5 of the DEO is the provision with the most potential to affect state AI legislation. While § 5 does not contain any attempt to actually preempt state AI laws, the threat of losing federal grant funds could have the practical effect of incentivizing some states to abandon their AI-related legislative efforts. And, as Daniel Cochrane and Jack Fitzhenry pointed out during the reconciliation moratorium fight, “Smaller conservative states with limited budgets and large rural populations need [BEAD] funds. But wealthy progressive states like California and New York can afford to take a pass and just keep enforcing their tech laws.” While politicians in deep blue states will be politically incentivized to fight the Trump administration’s attempts to preempt overwhelmingly popular AI laws even if it means losing access to some federal funds, politicians in red states may instead be incentivized to avoid conflict with the administration.
Section 5(a): Non-deployment BEAD funding
Section 5(a) of the DEO is easier to analyze than § 5(b), because it clearly identifies the funds that are in jeopardy—non-deployment BEAD funding. The BEAD program is a $42.45 billion federal grant program established by Congress in 2021 for the purpose of facilitating access to reliable, high-speed broadband internet for communities throughout the U.S. A portion of the $42.45 billion total was allocated to each of 56 states and territories in June 2023 by the National Telecommunications and Information Administration (NTIA). In June 2025, the NTIA announced a restructuring of the BEAD program that eliminated many Biden-era requirements and rescinded NTIA approval for all “non-deployment” BEAD funding, i.e., BEAD funding that states intended to spend on uses other than actually building broadband infrastructure. The total amount of BEAD funding that will ultimately be classified as “non-deployment” is estimated to be more than $21 billion.
BEAD funding was previously used as a carrot and stick for AI preemption in June 2025, as part of the effort to insert a moratorium or “temporary pause” on state AI regulation into the most recent reconciliation bill. There are two critical differences between the attempted use of BEAD funding in the reconciliation process and its use in the DEO, however. First, the DEO is, obviously, an executive order rather than a legislative enactment. This matters because agency actions that would be perfectly legitimate if authorized by statute can be illegal if undertaken without statutory authorization. Second, while the final drafts of the reconciliation moratorium would only have jeopardized BEAD funding belonging to states that chose to accept a portion of $500 million in additional BEAD funding that the reconciliation bill would have appropriated, the DEO would jeopardize non-deployment BEAD funding belonging to any state that attempts to regulate AI in a manner deemed undesirable under the DEO.
The multibillion-dollar question here is: can the administration legally withhold BEAD funding from states because those states enact or enforce laws regulating AI? I am going to cop out and say, honestly, that I don’t know for certain at this point. There are a number of potential legal issues with the course of action that the DEO contemplates, but as of November 20, 2025 (one day after the DEO first leaked), no one has published a definitive analysis of whether the administration will be able to overcome these obstacles.
The Trump administration’s Department of Transportation (DOT) recently attempted a maneuver similar to the one contemplated in the DEO. In response to an executive order directing agencies to “undertake any lawful actions to ensure that so-called ‘sanctuary’ jurisdictions… do not receive access to federal funds,” the DOT moved to add conditions to all DOT grant agreements requiring grant recipients to cooperate in the enforcement of federal immigration law. Affected states promptly sued to challenge the addition of this grant condition and successfully secured a preliminary injunction prohibiting DOT from implementing or enforcing the conditions. In early November 2025, the federal District Court for the District of Rhode Island ruled that the challenged conditions were unlawful for three separate reasons: (1) imposing the conditions exceeded the DOT’s statutory authority under the laws establishing the relevant grant programs; (2) imposing the conditions was “arbitrary and capricious,” in violation of the Administrative Procedure Act; and (3) imposing the conditions violated the Spending Clause of the U.S. Constitution. It remains to be seen whether the district court’s ruling will be upheld by a federal appellate court and/or the U.S. Supreme Court.
Suppose that, in the future, the Department of Commerce decides to withhold non-deployment BEAD funding from states with AI laws deemed undesirable under the DEO. States could challenge this decision in court and ask the court to order NTIA to release the previously allocated non-deployment funds to the states, arguing that the withholding of funds exceeded NTIA’s authority under the statute authorizing BEAD, violated the APA, and violated the Spending Clause. Each of these arguments seems at least somewhat plausible, on an initial analysis. Nothing in the statute authorizing BEAD appears to give the federal government unlimited discretion to withhold BEAD funds to vindicate policy goals that have little or nothing to do with access to broadband; rescinding previously awarded grant funds and then withholding them in order to further goals not contemplated by Congress is at least arguably arbitrary and capricious; and the course of action proposed in the DEO is, arguably, impermissibly coercive in violation of the Spending Clause.
AI regulation is a less politically divisive issue than immigration enforcement, and a cynical observer might assume that this would give states in this hypothetical AI case a better chance on appeal than the states in the DOT immigration conditions case discussed above. However, there are a number of differences between the DOT conditions case and the course of action contemplated in the DEO that could make it harder—or easier—for states to prevail in court. Accurately estimating states’ chances of success with high confidence will take more than one day’s worth of analysis.
It should also be noted that, regardless of whether states could eventually prevail in a hypothetical lawsuit, the prospect of having BEAD funding denied or delayed, perhaps for years, could be enough to discourage some states from enacting AI legislation of a type disfavored by the Department of Commerce under the DEO.
Section 5(b): Other discretionary agency funding
In addition to withholding non-deployment BEAD funding, the DEO would instruct agencies throughout the executive branch to “take immediate steps to assess their discretionary grant programs and determine whether agencies may condition such grants on States either not enacting an AI law that conflicts with the policy of this order… or, for those States that have enacted such laws, on those States entering into a binding agreement with the relevant agency not to enforce any such laws during any year in which it receives the discretionary funding.”
The legality of this contemplated course of action, and its likelihood of being upheld in court, are even more difficult to determine ex ante than the legality and prospects of the BEAD withholding discussed above. The federal government distributes about a trillion dollars a year in grants to state and local governments, and more than a quarter of that money, over $250 billion, is in the form of discretionary grants (as opposed to grants from mandatory programs such as Medicaid). That’s a lot of money, and it’s broken up into a lot of different discretionary grants. It’s likely that many of the arguments against withholding grant money from AI-regulating states would be the same from one grant to another. However, it is also likely that some discretionary grants to states could reasonably be conditioned on compliance with the President’s deregulatory AI policy directives, while for other grants such conditioning would be far less reasonable. Ultimately, further research into this issue is needed to determine how much state grant funding, if any, is legitimately at risk.
Issue 3: Federal Reporting and Disclosure
Section 6 of the DEO instructs the FCC, in consultation with AI czar David Sacks, to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” Presumably, “conflicting state laws” is intended to refer to state AI transparency laws such as California’s SB 53 and New York’s RAISE Act. It’s not clear from the language of the DEO what legal authority this “Federal reporting and disclosure standard” would be promulgated under. Under the Biden administration, the Department of Commerce’s Bureau of Industry and Security (BIS) attempted to impose reporting requirements on frontier model developers under the information-gathering authority provided by § 705 of the Defense Production Act—but § 705 has historically been used by BIS rather than the FCC, and I am not aware of any comparable authority that would authorize the FCC to implement a mandatory “federal reporting and disclosure standard” for AI models.
Generally, regulatory preemption can only occur when Congress has granted an executive-branch agency authority to promulgate regulations and preempt state laws inconsistent with those regulations. This authority can be granted expressly or by implication, but, as noted above in connection with Communications Act preemption under § 3 of the DEO, the FCC has never before asserted that it possesses any significant regulatory authority (express or otherwise) over any aspect of AI development. It’s possible that the FCC is relying on a creative interpretation of its authority under the Communications Act: FCC Chairman Brendan Carr previously indicated that the FCC was “taking a look” at whether the Communications Act grants the FCC authority to regulate AI and preempt onerous state laws. However, as discussed above, legal commentators almost universally agree that “[n]othing in the Communications Act confers FCC authority to regulate AI.”
It’s possible that the language of the DEO is simply meant to indicate that the FCC and Sacks will suggest a standard that could then be enacted into law by Congress. This would certainly overcome the legal obstacles discussed above, and could (depending on the language of the statute) allow for preemption of state AI transparency laws. However, it would require passing new federal legislation, which is easier said than done.
Issue 4: Preemption of state laws for “deceptive practices” under the FTC Act
Section 7 of the DEO directs the Federal Trade Commission (FTC) to issue a policy statement arguing that certain state AI laws are preempted by the FTC Act’s prohibition on deceptive commercial practices. Presumably, the laws this guidance is intended to target include Colorado’s AI Act, which the DEO’s Purpose section accuses of “forc[ing] AI models to embed DEI in their programming, and to produce false results in order to avoid a ‘differential treatment or impact’…” on enumerated demographic groups, along with other similar “algorithmic discrimination” laws. A policy statement on its own generally cannot preempt state laws, but it seems likely that the policy statement the DEO instructs the FTC to create would be relied upon in subsequent preemption-related regulatory efforts and/or by litigants seeking to prevent enforcement of the allegedly preempted laws in court.
The Trump administration has previously expressed disapproval of “woke” AI development practices, for example in the recent executive order on “Preventing Woke AI in the Federal Government.” But the argument that the FTC Act’s prohibition on UDAP (unfair or deceptive acts or practices in or affecting commerce) preempts state algorithmic discrimination laws is, as far as I am aware, new. During the Biden administration, Lina Khan’s FTC published guidance containing an arguably similar assertion: that the “sale or use of—for example—racially biased algorithms” would be an unfair or deceptive practice under the FTC Act. Khan’s FTC did not, however, attempt to use this aggressive interpretation of the FTC Act as a basis for FTC preemption of any state laws.
Colorado’s AI statute has been widely criticized, including by Governor Jared Polis (who signed the act into law) and other prominent Colorado politicians. In fact, the law has proven so problematic for Colorado that Governor Polis, a Democrat, was willing to cross party lines in order to support broad-based preemption of state AI laws for the sake of getting rid of Colorado’s. Therefore, an attempt by the Trump administration to preempt Colorado’s law (or portions thereof) might meet with relatively little opposition from within Colorado. It’s not clear who, if anyone, would have standing to challenge FTC preemption of Colorado’s law if Colorado’s attorney general refused to do so. But Colorado is not the only state with a law prohibiting algorithmic discrimination, and presumably the guidance the DEO instructs the FTC to produce would inform attempts to preempt other “woke” state AI laws as well as Colorado’s.
The question of how those attempts would fare in federal court is an interesting one, and I look forward to reading analysis of the issue from commentators with expertise regarding the FTC Act and algorithmic discrimination laws. Unfortunately, I am not such a commentator and will therefore plead ignorance on this point.
Ten Highlights of the White House’s AI Action Plan
Today, the White House released its AI Action Plan, laying out the administration’s priorities for AI innovation, infrastructure, and adoption. Ultimately, the value of the Plan will depend on how it is operationalized via executive orders and the actions of executive branch agencies, but the Plan itself contains a number of promising policy recommendations. We’re particularly excited about:
- The section on federal government evaluations of national security risks in frontier models. This section correctly identifies the possibility that “the most powerful AI systems may pose novel national security risks in the near future,” potentially including risks from cyberattacks and risks related to the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. Ensuring that the federal government has the personnel, expertise, and authorities needed to guard against these risks should be a bipartisan priority.
- The discussion of interpretability and control, which recognizes the importance of interpretability to the use of advanced AI systems in national security and defense applications. The Plan also recommends three policy actions for advancing the science of interpretability, each of which seems useful for frontier AI security in expectation.
- The overall focus on standard-setting by the Center for AI Standards and Innovation (CAISI, formerly known as the AI Safety Institute) and other government agencies, in partnership with industry, academia, and civil society organizations.
- The recommendation on building an AI evaluations ecosystem. The science of evaluating AI systems’ capabilities is still in its infancy, but the Plan identifies a few promising ways for CAISI and other government agencies to support the development of this critical field.
- The emphasis on physical and cybersecurity for frontier labs and bolstering critical infrastructure cybersecurity. As Leopold Aschenbrenner pointed out in “Situational Awareness,” AI labs are not currently equipped to protect their model weights and algorithmic secrets from being stolen by China or other geopolitical rivals of the U.S., and fixing this problem is a crucial national security imperative.
- The call to improve the government’s capacity for AI incident response. Advanced planning and capacity-building are crucial for ensuring that the government is prepared to respond in the event of an AI emergency. Incident response preparation is an effective way to increase resiliency without directly burdening innovation.
- The section on how the legal system should handle deceptive AI-generated “evidence.” Legal rules often lag behind technological development, and the guidance contemplated here could be highly useful to courts that might otherwise be unprepared to handle an influx of unprecedentedly convincing fake evidence.
- The recommendations for ramping up export control enforcement and plugging loopholes in existing semiconductor export controls. Compute governance—preventing geopolitical rivals from gaining access to the chips needed to train cutting-edge frontier AI models—continues to be an effective policy tool for maintaining the U.S.’s lead in the race to develop advanced AI systems before China.
- The suggested regulatory sandboxes, which could enable AI adoption and increase the AI governance capacity of sectoral regulatory agencies like the FDA and the SEC.
- The section on deregulation wisely rejects the maximalist position of the moratorium that was stripped from the recent reconciliation bill by a 99-1 Senate vote. Instead of proposing overbroad and premature preemption of virtually all state AI regulations, the Plan recommends that AI-related federal funding should not “be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
- At the moment, it’s hard to identify any significant source of “AI-related federal funding” to states, although this could change in the future. This being the case, it will likely be difficult for the federal government to offer states any significant inducement towards deregulation unless it first offers them new federal money. And disincentivizing truly “burdensome” state regulations that would interfere with the effectiveness of federal grants seems like a sensible alternative to broader forms of preemption.
- The Plan also seems to suggest that the FCC could preempt some state AI regulations under § 253 of the Communications Act of 1934. It remains to be seen whether and to what extent this kind of preemption is legally possible. At first glance, however, it seems unlikely that the FCC’s authority to regulate telecommunications services could legally be used for any especially broad preemption of state AI laws. Any broad FCC preemption under this authority would likely have to go through notice and comment procedures and might struggle to overcome legal challenges from affected states.
Two Byrd Rule Problems With the AI Moratorium
Note: this commentary was drafted on June 26, 2025, as a memo not intended for publication; we’ve elected to publish it in case the analysis laid out here is useful to policymakers or commentators following ongoing legislative developments regarding the proposed federal moratorium on state AI regulation. The issues noted here are relevant to the latest version of the bill as of 2:50 p.m. ET on June 30, 2025.
Two Byrd Rule issues have emerged, both of which should be fixed. It appears that the Parliamentarian has not ruled on either.
Effects on existing BEAD funding
The Parliamentarian may have already identified the first Byrd Rule issue: the plain text of the AI Moratorium would affect all $42.45 billion in BEAD funding, not just the newly allocated $500 million. It is not 100% certain that a court would read the statute this way, but it is the most likely outcome. We analyzed this problem in a recently published commentary. This issue could be fixed via an amendment.
Private enforcement of the moratorium
In that same article, we flagged a second issue that also presents a Byrd Rule problem: the AI Moratorium seemingly creates enforcement rights in private parties. That’s a problem under the Byrd Rule because the AI Moratorium must be a “necessary term or condition” of an outlay, and a right of enforcement held by third parties cannot be characterized as a necessary term or condition of an outlay that does not concern those parties. This can be fixed by clarifying that the only enforcement mechanism is withdrawal or denial of the new BEAD funding.
The text at issue – private enforcement of the moratorium
The plain text of the moratorium, and applicable legal precedents, likely empower private parties to enforce the moratorium in court. Stripping the provision down to its essentials, subsection (q) states that “no eligible entity or political subdivision thereof . . . may enforce . . . any law or regulation . . . limiting, restricting or otherwise regulating artificial intelligence models, [etc.].” That sounds like a prohibition. It doesn’t mention the Department of Commerce, nor does it leave to the Secretary’s discretion whether the prohibition applies. If states satisfy the criteria, they likely are prohibited from enforcing AI laws.
Nothing in the proposed moratorium or in 47 U.S.C. § 1702 generally provides that the only remedy for a violation of the moratorium is deobligation of obligated funds by the Assistant Secretary of Commerce for Communications and Information. And when comparable laws (e.g., the Airline Deregulation Act, 49 U.S.C. § 41713) have used similar language to expressly preempt state laws, courts have interpreted that language as authorizing private parties to sue for an injunction preventing enforcement of the preempted state laws. See, for example, Morales v. Trans World Airlines, Inc., 504 U.S. 374 (1992).
What would happen – private lawsuits to enforce the moratorium
Private parties could vindicate this right in one of two ways. First, if a private party (e.g., an AI company) fears that a state will imminently sue it for violating that state’s AI law, the private party could seek a declaratory judgment in federal court. Second, if the state actually sues the private party, that party could raise the moratorium as a defense to the lawsuit. If the private party is based in the same state, that defense would be heard in state court and could result in dismissal of the state’s claims; if the party is from out of state, the case could be removed to federal court, where a judge could likewise throw out the state’s claims.
Why it’s a Byrd Rule problem – private rights are not “terms or conditions”
The AI Moratorium must be a “necessary term or condition” of an outlay. Promising not to enforce AI laws is a valid “term or condition” of the grant; passively opening oneself up to lawsuits and defenses by private parties is not. Those lawsuits occur long after states take the money, are outside their control, and involve the actions of individuals who are not parties to the grant agreement. They also have significant effects unrelated to spending: binding the actions of states and invalidating laws in ways completely separate from the underlying transaction between the Department of Commerce and the states. It is perfectly compatible with the definition of “terms and conditions” for the Department of Commerce to deobligate funds if the terms of its grant are violated. It is an entirely different thing to create a defense or cause of action for third parties and to allow those parties to interfere with the enforcement power of states. The creation of rights for a third party uninvolved in the delivery or receipt of an outlay cannot be considered a necessary term or condition.
The AI Moratorium—the Blackburn Amendment and New Requirements for “Generally Applicable” Laws
Published: 9:55 pm ET on June 29, 2025
Last updated: 10:28 pm ET on June 29, 2025
The latest version of the AI moratorium has been released, with some changes to the “rule of construction.” We’ve published two prior commentaries on the moratorium (both of which are still relevant, because the updated text has not addressed the issues noted in either). The new text:
- Shortens the “temporary pause” from 10 to 5 years;
- Attempts to exempt laws addressing CSAM, children’s online safety, and rights to name/likeness/voice/image, although the amendment seemingly fails to protect the laws its drafters intend to exempt; and
- Creates a new requirement that laws do not create an “undue or disproportionate burden,” which is likely to generate significant litigation.
The amendment tries to protect state laws on child sexual abuse materials and recording artists, but likely fails to do so.
The latest text appears to be drafted specifically to address the concerns of Senator Marsha Blackburn, who does not want the moratorium to apply to state laws affecting recording artists (like Tennessee’s ELVIS Act) and laws addressing child sexual abuse material (CSAM). But while the amended text lists each of these categories of laws as specific examples of “generally applicable” laws or regulations, the new text only exempts those laws if they do not impose an “undue or disproportionate burden” on AI models, systems, or “automated decision systems,” as defined in the moratorium, in order to “reasonably effectuate the broader underlying purposes of the law or regulation.”
However, laws like the ELVIS Act likely impose a disproportionate burden on AI systems. They almost exclusively target AI systems and outputs, and the effect of the law will be borne almost entirely by AI companies. While trailing qualifiers always vex courts, the fact that “undue or disproportionate burden” is separated from the preceding list by a comma strongly suggests that it qualifies the entire list and not just “common law.” Common sense also counsels in favor of this reading: it’s unlikely that an inherently general body of law (like common law) would place a disproportionate burden on AI, while legislation like the ELVIS Act absolutely could (and likely does). As we read the new text, the most likely outcome is that the laws Senator Blackburn wants to protect would not be protected.
Even if other readings are possible, this “disproportionate” language would almost certainly create litigation if enacted, with companies challenging whether the ELVIS Act and CSAM laws are actually exempted. As we have previously noted, the moratorium will likely be privately enforceable—meaning that any company or individual against whom a state attempts to enforce a state law or regulation will be able to sue to prevent enforcement.
The newly added “undue or disproportionate burden” language creates an unclear standard (and will likely generate extensive litigation)
The problem discussed above extends beyond the specific laws that Senator Blackburn wishes to protect. Previously, “generally applicable” laws were exempted. Under the new language, laws that address AI models/systems or “automated decision systems” can be exempted, but only if they do not place an “undue or disproportionate burden” on said models/systems. The effect of the new “undue or disproportionate burden” language will likely be to generate additional litigation and uncertainty. It may also make it more likely that some generally applicable laws, such as facial recognition laws or data protection laws, will no longer be exempt because they may place a disproportionate burden on AI models/systems.
Other less significant changes
Previously, subsection (q)(2)(A)(ii) excepted any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of [AI models/systems/automated decision systems].” As amended, the relevant provision now excepts any law or regulation “the primary purpose and effect of which is to… streamline licensing, permitting, routing, zoning, procurement, or reporting procedures related to the adoption or deployment of [AI models/systems/automated decision systems].” This amended language is slightly broader than the original, but the difference does not seem highly significant.
Additionally, the structure of the paragraphs has been adjusted slightly, likely to make clear that subparagraph (B) (which requires that any fee or bond imposed by any excepted law be reasonable and cost-based and treat AI models/systems in the same manner as other models/systems) modifies both the “generally applicable law” and “primary purpose and effect” prongs of the rule of construction rather than just one or the other.
Other issues remain
As we’ve discussed previously, our best read of the text suggests that two additional issues remain unaddressed:
- Any state that takes any of the newly appropriated $500 million in BEAD funding runs the risk of having its entire share of the previously obligated $42.45 billion in existing BEAD funding clawed back for violations of the moratorium.
- Private companies and individuals will likely be able to enforce the moratorium through litigation.