Ten Highlights of the White House’s AI Action Plan
Today, the White House released its AI Action Plan, laying out the administration’s priorities for AI innovation, infrastructure, and adoption. Ultimately, the value of the Plan will depend on how it is operationalized via executive orders and the actions of executive branch agencies, but the Plan itself contains a number of promising policy recommendations. We’re particularly excited about:
- The section on federal government evaluations of national security risks in frontier models. This section correctly identifies the possibility that “the most powerful AI systems may pose novel national security risks in the near future,” potentially including risks from cyberattacks and risks related to the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. Ensuring that the federal government has the personnel, expertise, and authorities needed to guard against these risks should be a bipartisan priority.
- The discussion of interpretability and control, which recognizes the importance of interpretability to the use of advanced AI systems in national security and defense applications. The Plan also recommends three policy actions for advancing the science of interpretability, each of which seems useful for frontier AI security in expectation.
- The overall focus on standard-setting by the Center for AI Standards and Innovation (CAISI, formerly known as the AI Safety Institute) and other government agencies, in partnership with industry, academia, and civil society organizations.
- The recommendation on building an AI evaluations ecosystem. The science of evaluating AI systems’ capabilities is still in its infancy, but the Plan identifies a few promising ways for CAISI and other government agencies to support the development of this critical field.
- The emphasis on physical security and cybersecurity for frontier AI labs, and on bolstering the cybersecurity of critical infrastructure. As Leopold Aschenbrenner pointed out in “Situational Awareness,” AI labs are not currently equipped to protect their model weights and algorithmic secrets from being stolen by China or other geopolitical rivals of the U.S., and fixing this problem is a crucial national security imperative.
- The call to improve the government’s capacity for AI incident response. Advance planning and capacity-building are crucial for ensuring that the government is prepared to respond in the event of an AI emergency. Incident response preparation is an effective way to increase resiliency without directly burdening innovation.
- The section on how the legal system should handle deceptive AI-generated “evidence.” Legal rules often lag behind technological development, and the guidance contemplated here could be highly useful to courts that might otherwise be unprepared to handle an influx of unprecedentedly convincing fake evidence.
- The recommendations for ramping up export control enforcement and plugging loopholes in existing semiconductor export controls. Compute governance, which denies geopolitical rivals access to the chips needed to train cutting-edge AI models, continues to be an effective policy tool for maintaining the U.S.’s lead in the race to develop advanced AI systems before China.
- The suggested regulatory sandboxes, which could enable AI adoption and increase the AI governance capacity of sectoral regulatory agencies like the FDA and the SEC.
- The section on deregulation wisely rejects the maximalist position of the moratorium that was stripped from the recent reconciliation bill by a 99-1 Senate vote. Instead of proposing overbroad and premature preemption of virtually all state AI regulations, the Plan recommends that AI-related federal funding should not “be directed toward states with burdensome AI regulations that waste these funds,” while cautioning that the federal government “should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
  - At the moment, it’s hard to identify any significant source of “AI-related federal funding” to states, although this could change in the future. This being the case, it will likely be difficult for the federal government to offer states any significant inducement towards deregulation unless it first offers them new federal money. And disincentivizing truly “burdensome” state regulations that would interfere with the effectiveness of federal grants seems like a sensible alternative to broader forms of preemption.
  - The Plan also seems to suggest that the FCC could preempt some state AI regulations under § 253 of the Communications Act of 1934. It remains to be seen whether and to what extent this kind of preemption is legally possible, but at first glance it seems unlikely that the FCC’s authority to regulate telecommunications services could support any especially broad preemption of state AI laws. Any broad FCC preemption under this authority would likely have to go through notice-and-comment procedures and might struggle to survive legal challenges from affected states.