Commentary | December 2025

Legal Obstacles to Implementation of the AI Executive Order

Charlie Bullock

About a month ago, I published an initial analysis of a leaked draft of an AI-related executive order that was rumored to be forthcoming. For a few weeks thereafter, it looked as if the draft might not actually make it past the President’s desk, or as if the final version of the executive order might be substantially altered from the aggressive and controversial draft version. On December 11, 2025, however, President Trump signed an executive order that substantially resembled the leaked draft.

Because the executive order (EO) is virtually identical to the leaked draft in terms of its substance, my analysis of the legal issues raised by that draft remains applicable. But since I published that first analysis on November 20, LawAI has had a chance to conduct further research into some of the questions that I wasn’t able to definitively resolve in that first commentary. Additionally, intervening events have provided some important context for understanding what the consequences of the executive order will be for AI policy in the U.S. Accordingly, I’ve decided to publish this updated commentary, which incorporates most of the previous piece’s analysis as well as the results of subsequent research.

What Does the Executive Order Purport to Do?

As an initial matter, it’s important to understand what an executive order is and what legal effect executive orders have in the United States. An executive order is not a congressionally enacted statute or “law.” While Congress undoubtedly has the authority to preempt some state AI laws by passing legislation, the President generally cannot unilaterally preempt state laws by presidential fiat (nor does the EO purport to do so). What an executive order can do is publicly announce the policy goals of the executive branch of the federal government and issue directives from the President to executive branch officials and agencies. So, contrary to what some headlines seem to suggest, the EO does not, and could not, preempt any state AI laws. It does, however, instruct a number of actors within the executive branch to take various actions intended to make it easier to preempt state AI laws in the future, or to make it more difficult for states to enact or enforce AI laws that are inconsistent with the White House’s policy positions.

It’s also worth noting that the EO’s title, “Ensuring a National Policy Framework for Artificial Intelligence,” should not be taken at face value. Because the idea of passing federal AI policy is considerably more popular than the idea of preventing states from enacting AI policy, the use of the term “federal framework” or some equivalent phrase as a euphemism for preemption of state AI laws has become something of a trend among preemption advocates in recent months, and the EO is no exception. While the EO does discuss the need for Congress to pass a national policy framework for AI, and while sections 6 and 8 do contemplate the creation of affirmative federal policies, the EO’s primary goal is clearly the elimination of undesirable state AI laws rather than the creation of federal policy. 

The EO is relatively short, clocking in at just under 1,400 words, and consists of nine sections. This commentary summarizes the EO’s content, discusses the differences between the final version and the draft that leaked in November, and then briefly analyzes a few of the most important legal issues raised by the EO. This commentary is not intended to be comprehensive, and LawAI may publish additional commentaries and/or updates as events progress and additional legal issues come to light.

The EO’s nine sections are:

  • Two prefatory policy statements, §§ 1–2, which consist of a four-paragraph “Purpose” section announcing the policy justifications for the order and a one-sentence “Policy” section that contains a high-level summary of the White House’s AI policy position.
    • The gist of the Purpose section is that AI is a promising technology and that AI innovation is crucial to U.S. “national and economic security and dominance across many domains,” but that a “patchwork” of excessive state regulations threatens to suffocate innovation. The Purpose section also indicates that Congress should legislate to create a “minimally burdensome national standard” for AI regulation, but that the executive branch intends to get a head start on getting rid of burdensome state laws in the meantime.
    • The Policy section reads: “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
  • Section 3, which directs the U.S. Attorney General (AG) to establish an AI Litigation Task Force within the Department of Justice (DOJ), assigned to file lawsuits challenging state AI laws deemed by the AG to be unlawful. The EO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that DOJ can come up with.
  • Section 4, which directs the Department of Commerce to publish an “evaluation” of state AI laws, including a list of “onerous” laws. Presumably this will include Colorado’s controversial AI law, SB 24-205, which is specifically called out in § 1 of the EO, along with other state laws not mentioned. This list is supposed to inform the efforts contemplated in other sections of the EO by identifying laws that should be challenged or otherwise attacked.
  • Section 5, which contains two subsections that direct agencies to withhold federal grant funding from states that enact or enforce AI laws contrary to the EO’s policy goals. Subsection (a) indicates that the Department of Commerce will attempt to withhold non-deployment BEAD funding “to the maximum extent allowed by Federal law” from states with AI laws on the § 4 “onerous” list. Subsection (b) indicates that all federal agencies will assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or included in the § 4 list.
  • Section 6, which instructs the Federal Communications Commission, in consultation with AI czar David Sacks, to start a process for determining whether to adopt a federal AI transparency standard that would preempt state AI transparency laws.
  • Section 7, which directs the Federal Trade Commission (FTC) to issue guidance arguing that certain state AI laws (presumably including, but not necessarily limited to, Colorado’s AI Act and other “woke” / “algorithmic discrimination” laws) are preempted by the FTC Act’s prohibition on deceptive commercial practices.
  • Section 8, which instructs AI czar David Sacks and Office of Science and Technology Policy (OSTP) director Michael Kratsios to prepare a “legislative recommendation” to be submitted to Congress. This recommendation is supposed to lay out a “uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.”
  • Section 9, which contains miscellaneous housekeeping provisions concerning how the EO should be interpreted and published.

How Does the Published Executive Order Differ from the Draft Version that Was Leaked in November?

As noted above, the EO is extremely substantively similar to the draft that leaked in November. There are, however, a number of sentence-level changes, most of which were presumably made for reasons of style and clarity. The published EO also includes a few changes that are arguably significant for signaling reasons—that is, because of what they seem to say about the White House’s plan for implementing the EO. 

Most notably, Section 1 (the discussion of the EO’s “Purpose”) has been toned down in a few different ways. The initial draft specifically criticized both Colorado’s controversial algorithmic discrimination law and California’s Transparency in Frontier Artificial Intelligence Act (also known as SB 53), and dismissively referred to the “purely speculative suspicion that AI might ‘pose significant catastrophic risk.’” The leaked draft also suggested that “sophisticated proponents of a fear-based regulatory capture strategy” were responsible for these laws. The published version still criticizes the Colorado law, but does not contain any reference to SB 53, catastrophic risk, or regulatory capture. In light of this revision, it’s possible that SB 53—which is, by most accounts, a light-touch, non-burdensome transparency law that merely requires developers to create safety protocols of the sort that every frontier developer already creates and publishes—will not be identified as an “onerous” state AI law and targeted pursuant to the EO’s substantive provisions. To be clear, I think it’s still quite likely that SB 53 and similar transparency laws like New York’s RAISE Act will be targeted, but the removal of the explicit reference reduces the likelihood of that from “virtually certain” to “merely probable.” 

This change seems like a win, albeit a minor one, for “AI safety” types and others who worry about future AI systems creating serious risks to national security and public safety. The AI Action Plan that the White House released in July seemed to take the prospect of such risks quite seriously, so the full-throated dismissal in the leaked draft would have been a significant change of course.

The published EO also throws a bone to child safety advocates, which may also be significant for signaling reasons. It was somewhat surprising that the leaked draft did not contain any reference to child safety, because child safety is an issue that voters and activists on both sides of the aisle care about a great deal. The political clout wielded by child safety advocates is such that Republican-led preemption efforts have typically included some kind of explicit carve-out or concession on the issue. For example, the final revision of the moratorium that ended up getting stripped out of the Big Beautiful Bill in late June attempted to carve out an exception for state laws relating to “child online safety,” and Dean Ball’s recent preemption proposal similarly attempts to placate child safety advocates by acknowledging the importance of the issue and adding transparency requirements specifically intended to protect children.

The published EO mentions children’s safety in § 1 as one of the issues that federal AI legislation should address. And § 8, the “legislative proposal” section, states that the proposal “shall not propose preempting otherwise lawful State AI laws relating to… child safety protections.” Much has been made of this carve-out in the online discourse, and it does seem important for signaling reasons. If the White House’s legislative proposal won’t target child safety laws, it seems reasonable to suggest that other White House efforts to eliminate certain state AI laws might steer clear of child safety laws as well. However, it’s worth noting that the exception in § 8 applies only to § 8, and not to the more important sections of the EO such as § 3 and § 5. 

This leaves open the possibility that the Litigation Task Force might sue states with AI-related child safety laws, or that federal agencies might withhold discretionary grant funds from such states. Some right-leaning commentators have suggested that this is not a realistic possibility, because federal agencies will use their discretion to avoid going after child safety laws regardless of whether the EO specifically requires them to do so. It should be noted, however, that the category of “child safety laws” is broad and poorly defined, and that many of the state laws that the White House most dislikes could be reframed as child safety laws or amended to focus on child safety. In other words, a blanket policy of leaving “child safety” laws alone may not be feasible, or may not be attractive to the White House.

As for § 8 itself, a legislative proposal is just that—a proposal. It has no legal effect unless it is enacted into law by Congress. Congress may simply decline to enact the proposal—and given how rare it is for federal legislation (even legislation supported by the President) to actually be enacted, this is by far the most likely outcome. The White House has already thrown its weight behind a number of preemption-related legislative proposals in the past, and so far none of these proposals have managed to make it through Congress. It’s possible that the legislative proposal contemplated in § 8 will fare better, but the odds are not good. In my opinion, therefore, the child safety exception in § 8 is significant mostly because of what it tells us about the administration’s policy preferences rather than because of anything that it actually does.

The final potentially significant change relates to § 5(b), which contemplates withholding federal grant funding from states that regulate AI. In the leaked draft, that section directed federal agencies to review their discretionary grants to see if any could lawfully be withheld from states with AI laws designated as “onerous.” The published EO directs agencies to do the same, but directs them to do so “in consultation with” AI czar David Sacks. It remains to be seen whether this change will mean anything in practice—David Sacks is one man, and a very busy man at that, and may not have the staffing support that would realistically be needed to meaningfully review every agency’s response to the EO. But whatever that “in consultation with” ends up meaning in practice, it seems plausible to suggest that the change may at least marginally increase agencies’ willingness to withhold funds.

Issue 1: The Litigation Task Force 

The EO’s first substantive section, § 3, instructs the U.S. Attorney General to “establish an AI Litigation Task Force” charged with bringing lawsuits in federal court to challenge allegedly unlawful state AI laws. The EO suggests that the Task Force will challenge state laws that allegedly violate the dormant commerce clause and state laws that are allegedly preempted by existing federal regulations. The Task Force is also authorized to challenge state AI laws under any other legal basis that the Department of Justice (DOJ) can identify.

Dormant commerce clause arguments

Presumably, the EO’s reference to the commerce clause refers to the dormant commerce clause argument laid out by Andreessen Horowitz in September 2025. This argument, which a number of commentators have raised in recent months, suggests that certain state AI laws violate the commerce clause of the U.S. Constitution because they impose excessive burdens on interstate commerce.

LawAI’s analysis indicates that this commerce clause argument, at least with respect to the state laws most commonly cited as potential preemption targets, is legally dubious and unlikely to succeed in court. In addition to the overview included here, we intend to publish a more thorough analysis of this issue in the coming weeks.

In 2023, the Supreme Court issued an important dormant commerce clause opinion in the case of National Pork Producers Council v. Ross. The thrust of the majority opinion in that case, authored by Justice Gorsuch, is that state laws generally do not violate the dormant commerce clause unless they involve purposeful discrimination against out-of-state economic interests in order to favor in-state economic interests. 

Even proponents of this dormant commerce clause argument typically acknowledge that the state AI laws they are concerned with generally do not discriminate against out-of-state economic interests. Therefore, they often ignore Ross, or cite the dissenting opinions while ignoring the majority. Their preferred precedent is Pike v. Bruce Church, Inc., a 1970 case in which the Supreme Court held that a state law with “only incidental” effects on interstate commerce does not violate the dormant commerce clause unless “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” This standard opens the door for potential challenges to nondiscriminatory laws that arguably impose a “clearly excessive” burden on interstate commerce. 

The state regulation that was invalidated in Pike would have required cantaloupes grown in Arizona to be packed and processed in Arizona as well. The only state interest at stake was the “protect[ion] and enhance[ment] of [cantaloupe] growers within the state.” The Court in Pike specifically acknowledged that “[w]e are not, then, dealing here with state legislation in the field of safety where the propriety of local regulation has long been recognized.” 

Even under Pike, then, it’s hard to come up with a plausible argument for invalidating the state AI laws that preemption advocates are most concerned with. Andreessen Horowitz’s argument is that the state proposals in question, such as New York’s RAISE Act, “purport to have significant safety benefits for their residents,” but in fact “are unlikely” to provide substantial safety benefits. But this is, transparently, a policy judgment, and one with which the state legislature of New York evidently disagrees. As Justice Gorsuch observes in Ross, “policy choices like these usually belong to the people and their elected representatives. They are entitled to weigh the relevant ‘political and economic’ costs and benefits for themselves, and ‘try novel social and economic experiments’ if they wish.” New York voters overwhelmingly support the RAISE Act, as did the vast majority of New York’s state legislature when the bill was put to a vote. In my opinion, it is unlikely that any federal court will presume to override those policy judgments and substitute its own.

That said, it is possible to imagine a state AI law that would violate the dormant commerce clause. For example, a law that placed burdensome requirements on out-of-state developers while exempting in-state developers, in order to grant an advantage to in-state AI companies, would likely be unconstitutional. Since I haven’t reviewed every state AI bill that has been or will be proposed, I can’t say for sure that none of them would violate the dormant commerce clause. It is entirely possible that the Task Force will succeed in invalidating one or more state laws via a dormant commerce clause challenge. It does seem relatively safe, however, to predict that the specific laws referred to in the executive order and the state frontier AI safety laws most commonly referenced in discussions of preemption would likely survive any dormant commerce clause challenges brought against them.

State laws preempted by existing federal regulations

Section 3 also specifically indicates that the AI Litigation Task Force will challenge state laws that “are preempted by existing Federal regulations.” It is possible for state laws to be preempted by federal regulations, and, as with the commerce clause issue discussed above, it’s possible that the Task Force will eventually succeed in invalidating some state laws by arguing that they are so preempted. 

In the absence of significant new federal AI regulation, however, it is doubtful whether many of the state laws the EO is intended to target will be vulnerable to this kind of legal challenge. Moreover, any state AI law that created significant compliance costs for companies and was plausibly preempted by existing federal regulations could be challenged by the affected companies, without the need for DOJ intervention. The fact that (to the best of my knowledge) no such lawsuit has yet been filed challenging the most notable state AI laws indicates that the new Task Force will likely be faced with slim pickings, at least until new federal regulations are enacted and/or state regulation of AI intensifies.

Alternative grounds

Section 3 also authorizes the Task Force to challenge state AI laws that are “otherwise unlawful” in the Attorney General’s judgment. The Department of Justice employs a great number of smart and creative lawyers, so it’s impossible to say for sure what theories they might come up with to challenge state AI laws. That said, preemption of state AI laws has been a hot topic for months now, and the best theories that have been publicly floated for preemption by executive action are the dormant commerce clause theory discussed above and the Communications Act theory discussed below. This is, it seems fair to say, a bearish indicator, and I would be somewhat surprised if the Task Force managed to come up with a slam-dunk legal argument for broad-based preemption that has hitherto been overlooked by everyone who’s considered this issue.

Issue 2: Restrictions on State Funding

Section 5 of the EO contains two subsections directing agencies to withhold federal grant funding from states that attempt to regulate AI. Subsection (a) indicates that Commerce will attempt to withhold non-deployment Broadband Equity Access and Deployment (BEAD) funding “to the maximum extent allowed by federal law” from states with AI laws listed pursuant to § 4 of the EO, which instructs the Department of Commerce to identify state AI laws that conflict with the policy directives laid out in § 1 of the EO. Subsection (b) instructs all federal agencies to assess their discretionary grant programs and determine whether existing or future grants can be withheld from states with AI laws that are challenged under § 3 or identified as undesirable pursuant to § 4. 

In my view, § 5 of the EO is the provision with the most potential to affect state AI legislation. While § 5 does not attempt to actually preempt state AI laws, the threat of losing federal grant funds could have the practical effect of incentivizing some states to abandon their AI-related legislative efforts. And, as Daniel Cochrane and Jack Fitzhenry pointed out during the reconciliation moratorium fight, “Smaller conservative states with limited budgets and large rural populations need [BEAD] funds. But wealthy progressive states like California and New York can afford to take a pass and just keep enforcing their tech laws.” While politicians in deep blue states will be politically incentivized to fight the Trump administration’s attempts to preempt overwhelmingly popular AI laws even if it means losing access to some federal funds, politicians in red states may instead be incentivized to avoid conflict with the administration. 

Section 5(a): Non-deployment BEAD funding

Section 5(a) of the EO is easier to analyze than § 5(b), because it clearly identifies the funds that are in jeopardy—non-deployment BEAD funding. The BEAD program is a $42.45 billion federal grant program established by Congress in 2021 for the purpose of facilitating access to reliable, high-speed broadband internet for communities throughout the U.S. A portion of the $42.45 billion total was allocated to each of 56 states and territories in June 2023 by the National Telecommunications and Information Administration (NTIA). In June 2025, the NTIA announced a restructuring of the BEAD program that eliminated many Biden-era requirements and rescinded NTIA approval for all “non-deployment” BEAD funding, i.e., BEAD funding that states intended to spend on uses other than actually building broadband infrastructure. The total amount of BEAD funding that will ultimately be classified as “non-deployment” is estimated to be more than $21 billion. 

BEAD funding was previously used as a carrot and stick for AI preemption in June 2025, as part of the effort to insert a moratorium or “temporary pause” on state AI regulation into the most recent reconciliation bill. There are two critical differences between the attempted use of BEAD funding in the reconciliation process and its use in the EO, however. First, the EO is, obviously, an executive order rather than a legislative enactment. This matters because agency actions that would be perfectly legitimate if authorized by statute can be illegal if undertaken without statutory authorization. Second, while the final drafts of the reconciliation moratorium would only have jeopardized BEAD funding belonging to states that chose to accept a portion of $500 million in additional BEAD funding that the reconciliation bill would have appropriated, the EO jeopardizes non-deployment BEAD funding belonging to any state that attempts to regulate AI in a manner deemed undesirable under the EO.

The multibillion-dollar question here is: can the administration legally withhold BEAD funding from states because those states enact or enforce laws regulating AI? Unsatisfyingly enough, the answer to this question for now seems to be “no one knows for sure.” Predicting the outcome of a future court case that hasn’t been filed yet is always difficult, and here it’s especially difficult because it’s not clear exactly how the NTIA will choose to implement § 5(a) in light of the EO’s requirement to withhold funds only “to the maximum extent allowed by federal law.” That said, there is some reason to believe that states would have at least a decent chance of prevailing if they sued to prevent NTIA from withholding funds from AI-regulating states.

The basic argument against what the EO asks NTIA to do is simply that Congress provided a formula for allocating BEAD program funds to states, and did not authorize NTIA to withhold those congressionally allocated funds from states in order to vindicate unrelated policy goals. The EO anticipates this argument and attempts to manufacture a connection between AI and broadband by suggesting that “a fragmented State regulatory landscape for AI threatens to undermine BEAD-funded deployments, the growth of AI applications reliant on high-speed networks, and BEAD’s mission of delivering universal, high-speed connectivity.” In my view, this is a hard argument to take seriously. It’s difficult to imagine any realistic scenario in which (for example) laws imposing transparency requirements on AI companies would have any significant effect on the ability of internet providers to build broadband infrastructure. Still, the important question is not whether NTIA has the statutory authority to withhold funds as the EO contemplates, but rather whether states will be able to actually do anything about it. 

The Trump administration’s Department of Transportation (DOT) recently attempted a maneuver similar to the one contemplated in § 5 when, in response to an executive order directing agencies to “undertake any lawful actions to ensure that so-called ‘sanctuary’ jurisdictions… do not receive access to federal funds,” the DOT attempted to add conditions to all DOT grant agreements requiring grant recipients to cooperate in the enforcement of federal immigration law. Affected states promptly sued to challenge the addition of this grant condition and successfully secured a preliminary injunction prohibiting DOT from implementing or enforcing the conditions. In early November 2025, the federal District Court for the District of Rhode Island ruled that the challenged conditions were unlawful for three separate reasons: (1) imposing the conditions exceeded the DOT’s statutory authority under the laws establishing the relevant grant programs; (2) imposing the conditions was “arbitrary and capricious,” in violation of the Administrative Procedure Act; and (3) imposing the conditions violated the Spending Clause of the U.S. Constitution. It remains to be seen whether the district court’s ruling will be upheld by a federal appellate court and/or by the U.S. Supreme Court.

The lawsuit described above should give you some idea of what to expect from a lawsuit challenging NTIA withholding of BEAD funds. It’s likely that states would make both statutory and constitutional arguments; in fact, they might even make spending clause, APA, and ultra vires (i.e., exceeding statutory authority) arguments similar to the ones discussed above. However, there are important differences between the executive actions that gave rise to that DOT case and the actions contemplated by § 5(a). For one thing, 47 U.S.C. § 1702(o) exempts the NTIA’s BEAD-related decisions from the requirements of the APA, meaning that it will likely be harder for states to challenge NTIA actions as being arbitrary and capricious. For another, 47 U.S.C. § 1702(n) dictates that all lawsuits brought under BEAD’s authorizing statute to challenge NTIA’s BEAD decisions will be subject to a standard of review that heavily favors the government. Essentially, this standard of review says that the NTIA’s decisions can’t be overturned unless they’re the result of corruption, fraud, or misconduct. 

This overview isn’t the place to get too deep into the weeds on the question of whether and how states might be able to get around these statutory hurdles. Suffice it to say that there are plausible arguments to be made on both sides of the debate. For example, courts sometimes hold that a lawsuit arguing that an agency’s actions exceed its statutory authority does not arise “under” the statute in question and is therefore not subject to the statute’s standard of review (although this is a narrow exception to the usual rule).

Suppose that, in the future, the Department of Commerce decides to withhold non-deployment BEAD funding from states with AI laws deemed undesirable under the EO. States could challenge this decision in court and ask the court to order NTIA to release the previously allocated non-deployment funds, arguing that the withholding of funds exceeded NTIA’s authority under the statute authorizing BEAD and violated the Spending Clause. Each of these arguments seems at least somewhat plausible, on an initial analysis. Nothing in the statute authorizing BEAD appears to give the federal government unlimited discretion to withhold BEAD funds to vindicate policy goals that have little or nothing to do with access to broadband, and the course of action proposed in the EO is, arguably, impermissibly coercive in violation of the Spending Clause.

AI regulation is a less politically divisive issue than immigration enforcement, and a cynical observer might assume that this would give states in this hypothetical AI case a better chance on appeal than the states had in the DOT immigration conditions case discussed above. However, the statutory hurdles discussed above may make it harder for states to prevail here than it was in the DOT conditions case. It should also be noted that, regardless of whether or not states could eventually prevail in a hypothetical lawsuit, the prospect of having BEAD funding denied or delayed, perhaps for years, could be enough to discourage some states from enacting AI legislation of a type disfavored by the Department of Commerce under the EO.

Section 5(b): Other discretionary agency funding

In addition to withholding non-deployment BEAD funding, the EO instructs agencies throughout the executive branch to assess their discretionary grant programs and determine whether discretionary grants can legally be withheld from states that have AI laws that “conflict[] with the policy of this order.”

The legality of this contemplated course of action, and its likelihood of being upheld in court, are even more difficult to conclusively determine ex ante than the legality and prospects of the BEAD withholding discussed above. The federal government distributes about a trillion dollars a year in grants to state and local governments, and more than a quarter of that money is in the form of discretionary grants (as opposed to grants from mandatory programs such as Medicaid). That’s a lot of money, and it’s broken up into a lot of different discretionary grants. It seems safe to predict that most discretionary grants will not be subject to withholding, since the one thing that all potential candidates have in common is that Congress did not anticipate that they would be withheld in order to prevent state AI regulation. But depending on the amount of discretion Congress conferred on the agency in question, it may be that some grants can be withheld for almost any reason or for no reason at all. There may also be some grants that legitimately relate to the tech deregulation policy goals the administration is pursuing here.

It’s likely that many of the arguments against withholding grant money from AI-regulating states will be the same from one grant to another—as discussed above in the context of § 5(a), states will likely argue that withholding grant funds to vindicate unrelated policy goals violates the Spending Clause, exceeds the agency’s statutory authority, and violates the Administrative Procedure Act. These arguments will be stronger with respect to some grants and weaker with respect to others, depending on factors such as the language of the authorizing statute and the purpose for which the grant was to be awarded. At this point, therefore, there’s no way to know for sure how much money the federal government will attempt to withhold and how much (if any) it will actually succeed in withholding. Nor is it clear which states will resort to litigation and which the administration will succeed in pressuring into giving up their AI regulations without a fight. Unlike many other provisions of the EO, § 5(b) does not contain a deadline by which agencies must complete their review, so it’s possible that we won’t have a fuller picture of which grants will be in danger for many months. 

Issue 3: Federal Reporting and Disclosure Standard

Section 6 of the EO instructs the FCC, in consultation with AI czar David Sacks, to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” It’s likely that the “conflicting state laws” referred to include state AI transparency laws such as California’s SB 53 and New York’s RAISE Act. It’s not clear from the language of the EO what legal authority this “Federal Reporting and Disclosure Standard” would be promulgated under. Under the Biden administration, the Department of Commerce’s Bureau of Industry and Security (BIS) controversially attempted to impose reporting requirements on frontier model developers under the information-gathering authority provided by § 705 of the Defense Production Act—but § 705 has historically been used by BIS rather than the FCC, and I am not aware of any comparable authority that would authorize the FCC to implement a mandatory “federal reporting and disclosure standard” for AI models. 

Generally, regulatory preemption can only occur when Congress has granted an executive-branch agency authority to promulgate regulations and preempt state laws inconsistent with those regulations. This authority can be granted expressly or by implication, but the FCC has never before asserted that it possesses any significant regulatory authority (express or otherwise) over any aspect of AI development. It’s possible that the FCC is relying on a creative interpretation of its authority under the Communications Act—after the AI Action Plan discussed the possibility of FCC preemption, FCC Chairman Brendan Carr indicated that the FCC was “taking a look” at whether the Communications Act grants the FCC authority to regulate AI and preempt onerous state laws. However, commentators who have researched this issue and experts on the FCC’s legal authorities almost universally agree that “[n]othing in the Communications Act confers FCC authority to regulate AI.” 

The fundamental obstacle to FCC preemption of state AI laws is that the Communications Act authorizes the FCC to regulate telecommunications services, and AI is not a telecommunications service. In the past, the FCC has sometimes suggested expansive interpretations of the Communications Act in order to claim more regulatory territory for itself, but claiming broad regulatory authority over AI would be significantly more ambitious than these (frequently unsuccessful) prior attempts. Moreover, this kind of creative reinterpretation of old statutes to create new agency authorities is much harder to get past a court today than it would have been even ten years ago, because of Supreme Court decisions eliminating Chevron deference and establishing the major questions doctrine. In his comprehensive policy paper on FCC preemption of state AI laws, Lawrence J. Spiwak (a staunch supporter of preemption) analyzes the relevant precedents and concludes that “given the plain language of the Communications Act as well as the present state of the caselaw, it is highly unlikely the FCC will succeed in [AI preemption] efforts” and that “trying to contort the Communications Act to preempt the growing patchwork of disparate state AI laws is a Quixotic exercise in futility.” Harold Feld of Public Knowledge essentially agrees with this assessment in his piece on the same topic.

Issue 4: Preemption of state laws for “deceptive practices” under the FTC Act

Section 7 of the EO directs the Federal Trade Commission (FTC) to issue a policy statement arguing that certain state AI laws are preempted by the FTC Act’s prohibition on deceptive commercial practices. Presumably, the laws which the EO intends for this guidance to target include Colorado’s AI Act, which the EO’s Purpose section accuses of “forc[ing] AI models to produce false results in order to avoid a ‘differential treatment or impact’” on protected groups, and other similar “algorithmic discrimination” laws. A policy statement on its own generally cannot preempt state laws, but it seems likely that the policy statement that the EO instructs the FTC to create would be relied upon in subsequent preemption-related regulatory efforts and/or by litigants seeking to prevent enforcement of the allegedly preempted laws in court. 

While the Trump administration has previously expressed disapproval of “woke” AI development practices, for example in the recent executive order on “Preventing Woke AI in the Federal Government,” this argument that the FTC Act’s prohibition on UDAP (unfair or deceptive acts or practices in or affecting commerce) preempts state algorithmic discrimination laws is, as far as I am aware, new. During the Biden administration, Lina Khan’s FTC published guidance containing an arguably similar assertion: that the “sale or use of—for example—racially biased algorithms” would be an unfair or deceptive practice under the FTC Act. Khan’s FTC did not, however, attempt to use this aggressive interpretation of the FTC Act as a basis for FTC preemption of any state laws. In fact, as far as I can tell, the FTC has never used the FTC Act’s prohibition on deceptive acts or practices to preempt state civil rights or consumer protection laws, no matter how misguided those laws were, meaning that the approach contemplated by the EO appears to be totally unprecedented.

Colorado’s AI law, SB 24-205, has been widely criticized, including by Governor Jared Polis (who signed the act into law) and other prominent Colorado politicians. In fact, the law has proven so problematic for Colorado that Governor Polis, a Democrat, was willing to cross party lines in order to support broad-based preemption of state AI laws for the sake of getting rid of Colorado’s. Therefore, an attempt by the Trump administration to preempt Colorado’s law (or portions thereof) might meet with relatively little opposition from within Colorado. It’s not clear who, if anyone, would have standing to challenge FTC preemption of Colorado’s law if Colorado’s attorney general refused to do so. But Colorado is not the only state with a law prohibiting algorithmic discrimination, and presumably the guidance the EO instructs the FTC to produce would inform attempts to preempt other “woke” state AI laws as well as Colorado’s. 

If the matter did go to court, however, it seems likely that states would prevail. As bad as Colorado’s law may be (and, personally, I think it’s a pretty bad law), it’s very difficult to plausibly argue that it, or any similar state algorithmic discrimination law, requires any “deceptive act or practice affecting commerce.” The Colorado law requires developers and deployers of certain AI systems to use “reasonable care” to protect consumers from “algorithmic discrimination.” It also imposes a headache-inducing laundry list of documentation and record-keeping requirements on developers and deployers, which mostly relate to documenting efforts to avoid algorithmic discrimination. But, crucially, none of the law’s requirements appear to dictate that any AI output has to be untruthful—and regardless, creating an untruthful output need not be a “deceptive act or practice” under the FTC Act if the company provides consumers with enough information to ensure that they will not be deceived by the untruthful output.

“Algorithmic discrimination” is defined in the Colorado law to mean “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived [list of protected characteristics].” Note that only “unlawful” discrimination qualifies. FTC precedents establish that a deceptive act or practice occurs when there is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably under the circumstances. The EO’s language seems to ask the FTC to argue that the prohibition on “differential impacts” will in practice require untruthful outputs because it prohibits companies from acknowledging the reality of group differences. But since only “unlawful” differential impacts are prohibited by the Colorado law, the only circumstance in which the Colorado law could be interpreted to require untruthful outputs is if some other valid and existing law already required such an output. And, again, even a requirement that did in practice encourage the creation of untruthful outputs would not necessarily result in “deception,” especially given the extensive disclosure requirements that the Colorado law includes.
