Blog Post | October 2024

Commerce just proposed the most significant federal AI regulation to date – and no one noticed

Charlie Bullock

A little more than a month ago, the Bureau of Industry and Security (“BIS”) proposed a rule that, if implemented, might just be the most significant U.S. AI regulation to date. The proposed rule has received relatively scant media attention as compared to more ambitious AI governance measures like California’s SB 1047 or the EU AI Act, to no one’s surprise—regulation is rarely an especially sexy topic, and the proposed rule is a dry, common-sense, procedural measure that doesn’t require labs to do much of anything besides send an e-mail or two to a government agency once every few months. But the proposed rule would allow BIS to collect a lot of important information about the most advanced AI models, and it’s a familiar fact of modern life that complex systems like companies and governments and large language models thrive on a diet of information. 

This being the case, anyone who’s interested in what the U.S. government’s approach to frontier AI regulation is likely to look like would probably be well-served by a bit of context about the rule and its significance. If that’s you, then read on.

What does the proposed rule do?

Essentially, the proposed rule would allow BIS to collect information on a regular basis about the most advanced AI models, which the proposed rule calls “dual-use foundation models.”1 The rule provides that any U.S. company that plans to conduct a sufficiently large AI model training run2 within the next six months must report that fact to BIS on a quarterly basis (i.e., once every three months, by specified reporting deadlines). Companies that plan to build or acquire sufficiently large computing clusters for AI training are similarly required to notify BIS. 

Once a company has notified BIS of qualifying plans or activities, the proposed rule states that BIS will send the company a set of questions, which must be answered within 30 days. BIS can also send companies additional “clarification questions” after receiving the initial answers, and these clarification questions must be answered within 7 days. 

The proposed rule includes a few broad categories of information that BIS will certainly collect. For instance, BIS is required under the rule to ask companies to report the results of any red-teaming safety exercises conducted and the physical and cybersecurity measures taken to protect model weights. Importantly, however, the proposed rule would not limit BIS to asking these questions—instead, it provides that BIS questions “may not be limited to” the listed topics. In other words, the proposed rule would provide BIS with extremely broad and flexible information-gathering capabilities.  

Why does the proposed rule matter?

The Notice of Proposed Rulemaking (“NPRM”) doesn’t come as a surprise; observers have been expecting something like it for a while, because President Biden ordered the Department of Commerce to implement reporting requirements for “dual-use foundation models” in § 4.2(a) of Executive Order 14110. Also, BIS previously sent out an initial one-off survey to selected AI companies in January 2024, collecting information similar to what the new rule would collect on a recurring basis.

But while the new proposed rule isn’t unexpected, it is significant. AI governance researchers have emphasized the importance of reporting requirements, writing of a “growing consensus among experts in AI safety and governance that reporting safety information to trusted actors in government and industry is key” for responding to “emerging risks presented by frontier AI systems.” And most of the more ambitious regulatory frameworks for frontier AI systems that have been proposed or theorized would require the government to collect and process safety-relevant information. Doing this effectively—figuring out what information needs to be collected and what the collected information means—will require institutional knowledge and experience, and collecting safety information under the proposed rule will allow BIS to cultivate that knowledge and experience internally. In short, the proposed rule is an important first step in the regulation of frontier models. 

Labs already voluntarily share some safety information with the government, but these voluntary commitments have been criticized as “vague, sensible-sounding pledge[s] with lots of wiggle room,” and are not enforceable. In effect, voluntary commitments obligate companies only to share whatever information they want to share, whenever they want to share it. The proposed rule, on the other hand, would be legally enforceable, with potential civil and criminal penalties for noncompliance, and would allow BIS to choose what information to collect.

Pushback and controversy

Like other recent attempts to regulate frontier AI developers, the proposed rule has attracted some amount of controversy. However, the recently published public comments on the rule seem to indicate that the rule is unlikely to be challenged in court—and that, unless the next presidential administration decides to change course and scrap the proposed rule, reporting requirements for dual-use foundation models are here to stay.

The proposed rule and the Defense Production Act

As an executive-branch agency, BIS typically only has the legal authority to issue regulations if some law passed by Congress authorizes the kind of regulation contemplated. According to BIS, congressional authority for the proposed rule comes from § 705 of the Defense Production Act (“DPA”).

The DPA is a law that authorizes the President to take a broad range of actions in service of “the national defense.” The DPA was enacted during the Korean War and was initially used solely for purposes related to defense industry production. Since then, Congress has renewed the DPA a number of times and has significantly expanded the statute’s definition of “national defense” to include topics such as “critical infrastructure protection and restoration,” “homeland security,” “energy production,” and “space.”

Section 705 of the DPA authorizes the President to issue regulations and conduct industry surveys to “obtain such information… as may be necessary or appropriate, in his discretion, to the enforcement or administration of [the DPA].” While § 705 is very broadly worded, and on its face appears to give the President a great deal of discretionary authority to collect all kinds of information, it has historically been used primarily to authorize one-off “industrial base assessment” surveys of defense-relevant industries. These assessments have typically been time-bounded efforts to analyze the state of a specified industry that result in long “assessment” documents. Interestingly enough, BIS has conducted an assessment of the artificial intelligence industry once before, in 1994.3

Unlike past industrial base assessments, the proposed rule would allow the federal government to collect information from industry actors on an ongoing basis, indefinitely. This means that the kind of information BIS requests, and the purposes for which it uses that information, may change over time in response to advances in AI capabilities and in efforts to understand and evaluate AI systems. And unlike past assessment surveys, the rule’s purpose is not simply to aid in the preparation of a single snapshot assessment of the industry. Instead, BIS intends to use the information it collects to “ensure that the U.S. Government has the most accurate, up-to-date information when making policy decisions” about AI and the national defense.

Legal and policy objections to reporting requirements under Executive Order 14110

After Executive Order 14110 was issued in October 2023, one of the most common criticisms of the more-than-100-page order was that its reliance on the DPA to justify reporting requirements was unlawful. This criticism was repeated by a number of prominent Republican elected officials in the months following the order’s publication, and the prospect of a lawsuit challenging the legality of reporting requirements under the executive order was widely discussed. But while these criticisms were based on legitimate and understandable concerns about the separation of powers and the scope of executive-branch authority, they were not legally sound. Ultimately, any lawsuit challenging the proposed rule would likely need to be filed by the leading AI labs that are subject to the rule’s requirements, and none of those labs seem inclined to raise the kind of fundamental objections to the rule’s legality that early reactions to the executive order contemplated.

The basic idea behind the criticisms of the executive order was that it used the DPA in a novel way, to do something not obviously related to the industrial production of military materiel. To some skeptics of the Biden administration, or observers generally concerned about the concentration of political power in the executive branch, the executive order looked like an attempt to use emergency wartime powers in peacetime to increase the government’s control over private industry. The public comment4 on BIS’s proposed rule by the Americans for Prosperity Foundation (“AFP”), a libertarian advocacy group, is a representative articulation of this perspective. AFP argues that the DPA is an “emergency” statute that should not be used in non-emergencies for purposes not directly related to defense industry production.

This kind of concern about peacetime abuses of DPA authority is not new. President George H.W. Bush, after signing a bill reauthorizing the DPA in 1992, remarked that using § 705 during peacetime to collect industrial base data from American companies would “intrude inappropriately into the lives of Americans who own and work in the Nation’s businesses.” And former federal judge Jamie Baker, in an excellent 2021 paper on the DPA’s potential as an AI governance tool, predicted that using § 705 to collect information from “private companies engaged in AI research” would meet with “challenge and controversy.”

Still, to quote from Judge Baker’s piece again, “Section 705 is clearly written and the authority it presents is strong.” Nothing in the DPA indicates that industrial base surveys under § 705 cannot be continuously ongoing, or that the DPA generally can only be used to encourage increased defense industry production. It’s true that § 705 and related regulations both focus on gathering information about the capacity of the U.S. industrial base to support “the national defense.” But recall that the DPA defines the term “national defense” very broadly, to include a wide variety of non-military considerations such as critical infrastructure protection. Moreover, the DPA generally has been used for purposes not directly related to defense industry production by Presidents from both parties for decades. For example, DPA authorities have been used to supply California with natural gas during the 2000-2001 energy crisis and to block corporate acquisitions that would have given Chinese companies ownership interests in U.S. semiconductor companies. In short, while critics of the proposed rule can reasonably argue that using the DPA in novel ways to collect information from private AI companies is bad policy and politically undesirable, it is much harder to mount a credible argument that the proposed rule is unlawful.

Also, government access to up-to-date information about frontier models may be more important to national security, and even to military preparedness specifically, than the rule’s critics anticipate. A significant portion of the NPRM is devoted to justifying the rule’s importance to “the national defense” and “the defense industrial base.” According to BIS, integrating dual-use foundation models into “military equipment, signal intelligence devices, and cybersecurity software” could soon become important to the national defense. Therefore, BIS claims, the government needs access to information from developers both to determine whether government action to stimulate further dual-use foundation model development is needed and “to ensure that dual-use foundation models operate in a safe and reliable manner.”

In any event, any lawsuit challenging the proposed rule would probably have to be brought by one of the labs subject to the reporting requirements.5 A few leading AI labs have submitted public comments on the rule, but none expressed any objection to the basic concept of an ongoing system of mandatory reporting requirements for dual-use foundation model developers. Anthropic’s comment requests only that the reporting requirements be semiannual rather than quarterly, that labs have more time to respond to questions, and that BIS tweak some of the definitions in the proposed rule and take steps to ensure that the sensitive information contained in labs’ responses is handled securely. OpenAI’s comment goes a bit further, asking (among other requests) that BIS limit itself to collecting only “standardized” information relevant to national security concerns and to using the information collected “for the sole purpose to ensure [sic] and verify the continuous availability of safe, reliable, and effective AI.” But neither those labs nor any of their competitors has voiced any fundamental objection to the basic idea of mandatory reporting requirements that allow the government to collect safety information about dual-use foundation models. This is unsurprising given that these and other leading AI companies have already committed to voluntarily sharing similar information with the U.S. and other governments. In other words, while it’s too soon to be certain, it looks like the reporting requirements are unlikely to be challenged in court for the time being.

Conclusion

“Information,” according to LawAI affiliate Noam Kolt and his distinguished co-authors, “is the lifeblood of good governance.” The field of AI governance is still in its infancy, and at times it seems like there’s near-universal agreement on the need for the federal government to do something and near-universal disagreement about what exactly that something should be. Establishing a flexible system for gathering information about the most capable models, and building up the government’s capacity for collecting and processing that information in a secure and intelligent way, seems like a good first step. The regulated parties, who have voluntarily committed to sharing certain information with the government and have largely chosen not to object to the idea of ongoing information-gathering by BIS, seem to agree. In an ideal world, Congress would pass a law explicitly authorizing such a system; maybe someday it will. In the meantime, it seems likely that BIS will implement some amended version of its proposed rule in the near future, and that the result will, for better or worse, be the most significant federal AI regulation to date.


Last edited on: October 30, 2024

