The National Security Memo on AI: What to Expect in Trump 2.0
Any opinions expressed in this post are those of the author and do not reflect the views of the Institute for Law & AI or the U.S. Department of Defense.
On October 24, 2024, President Biden’s National Security Advisor Jake Sullivan laid out the U.S. government’s “first-ever strategy for harnessing the power and managing the risks of AI to advance [U.S.] national security.”1 The National Security Memorandum on AI (NSM) was initially seen as a major development in U.S. national security policy, but, following former President Donald Trump’s victory in the 2024 election, it is unclear what significance the NSM retains. If he is so inclined, President Trump can rescind the NSM on his first day in office, as he has promised to do with President Biden’s 2023 AI Executive Order (EO). But national security has traditionally been a policy space with a significant degree of continuity between administrations, and at least some of the policy stances embodied in the NSM seem consistent with the first Trump administration’s approach to issues at the intersection of AI and national security.
So, does the NSM still matter? Will the incoming administration repeal it completely, or merely amend certain provisions while leaving others in place? And what, if anything, might any repealed provisions be replaced with? While other authors have already provided comprehensive analyses of the NSM’s provisions and its accompanying framework, none have focused their assessments on how the documents will fare under the incoming administration. This blog post attempts to fill that gap by analyzing how President Trump and his key advisors may change or continue some of the NSM’s most significant provisions. In summary:
- Bias and Discrimination: The Trump Administration is likely to repeal NSM provisions that focus on issues of bias and discrimination, including its recognition of bias and discrimination as one of nine core AI risk categories.
- “Safe, Secure, and Trustworthy” AI: The NSM contains a number of provisions directing agencies to establish risk management practices, create benchmarks and standards, conduct testing, and release guidance for evaluating the safety, security, and trustworthiness of advanced AI systems. These provisions do not impose mandatory obligations on private companies, and may survive for that reason. However, conservatives may object to the NSM’s focus on mitigating risks rather than encouraging adoption through deregulation.
- Responding to Foreign Threats, Particularly from China: The incoming administration appears likely to continue the NSM’s initiatives aimed at slowing AI progress by China and other U.S. adversaries—including directing the Intelligence Community to focus on threats to the U.S. AI ecosystem and strengthening inbound investment screening—though potentially with a revised approach more consistent with President Trump’s explicit strategic focus on China during his first term.
- Infrastructure: The Trump administration seems poised to expand upon or at least continue the NSM’s efforts to develop the energy production capabilities necessary for expected future AI development and deployment needs, to strengthen domestic chip production, and to make AI resources accessible to researchers who lack private-sector funding, while increasing the government’s efficiency in using its own AI resources.
- Talent and Immigration: President Trump appears likely to continue the NSM’s initiatives aimed at better recruiting and retaining AI talent across the national security enterprise, but whether he will accept the NSM’s provisions relating to high-skilled immigration is uncertain.
Background
Created in response to a directive in President Biden’s AI EO,2 the National Security Memorandum on AI (NSM) was a major national security policy priority for the Biden administration. Few technologies over the last 75 years have received similar top-level, interagency attention; the Biden administration officials who designed the NSM have said that they took inspiration from historical efforts to compete against the Soviets in nuclear and space technologies. The NSM is detailed, specific, and lengthy, running to more than twice the length of every other national security memorandum issued by the Biden administration except the National Security Memorandum on Critical Infrastructure Security and Resilience (NSM-22).
Relative to some of the Biden administration’s other AI policy documents, the NSM more narrowly focuses on the strategic consequences of AI for U.S. national security. It identifies AI as an “era-defining technology”3 and paints a picture of the United States in a great power competition that, at its core, is a struggle for technological supremacy.4 The NSM argues that, if the United States does not act now using a coordinated, responsible, and whole-of-society approach to take advantage of AI advances, it “risks losing ground to strategic competitors”5 and that this lost technological edge “could threaten U.S. national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order.”6
Where previous Biden administration AI documents either took a non-sector-specific approach,7 excluded national security systems,8 focused guidance narrowly on autonomous and semi-autonomous weapon systems,9 or provided high-level principles rather than concrete direction,10 the NSM requires follow-through by all government agencies across the national security enterprise and helps enable that follow-through with concrete implementation guidance. Specifically, the NSM includes more than 80 compulsory assignments11 to relevant agencies in support of efforts to promote and secure U.S. leadership in AI (focusing particularly on frontier AI models12), harness AI to achieve U.S. national security goals, and engage with other countries and multilateral organizations to influence the course of AI development efforts around the world in a direction consistent with U.S. values and interests.13 Those assignments to agencies seek to accelerate domestic AI development while slowing the development of U.S. adversaries’ capabilities and managing technological risks, including “AI safety, security, and trustworthiness.”14 Inside the national security enterprise, the NSM seeks to enable effective and responsible AI use while ensuring agencies can manage the technology’s risks.
To the same ends, the NSM provides and requires agencies to follow a separate15 governance and risk management framework for “AI used as a component of a National Security System.”16 The framework sets concrete boundaries for national security agencies’ responsible adoption of AI systems in several ways.17 First, it delineates AI use restrictions and minimum risk management safeguards for specific use cases, ensuring agencies know what they can and cannot legally use AI for and when they must take more thorough risk reduction measures before a given stage of the AI lifecycle. The framework also requires agencies to catalog and monitor their AI use, facilitating awareness and accountability for all AI uses up the chain of command. Lastly, the framework requires agencies to establish standardized training and accountability requirements and guidelines to ensure their personnel’s responsible use and development of AI.
Logistics of a Repeal
Presidents are generally free to revoke, replace, or modify the presidential memoranda issued by their predecessors as they choose, without permission from Congress. To repeal the NSM, President Trump could issue a new memorandum rescinding the entire NSM (and the accompanying framework) or repealing certain provisions while retaining others. A new Executive Order, potentially with the broader purpose of repealing the Biden AI EO, could serve the same function. Both of these options would typically include a policy review led by the National Security Council to assess the status quo and recommend updates, though each presidential administration has revised the exact process to fit its needs. If the NSM does not end up being a top priority, President Trump could also informally direct18 national security agency heads to stop or change their implementation of some of the NSM’s provisions before he issues a formal policy document.
Bias and Discrimination
The first NSM provisions on the chopping block will, in all likelihood, be those that focus on bias and discrimination. President Trump and conservatives across the board have vowed to “stop woke and weaponized government” and generally view many of the Biden administration’s policies in this arena as harming U.S. competitiveness and growth, stifling free speech, and negatively impacting U.S. homeland and national security. In the AI context, the 2024 GOP Platform promised to repeal the Biden EO, stating that it “hinders AI Innovation,… imposes Radical Leftwing ideas on the development of this technology,” and restricts freedom of speech.
While not as focused on potentially controversial social issues as some other Biden administration AI policy documents,19 the NSM does contain several provisions to which the Trump administration will likely object. Specifically, the incoming administration seems poised to cut the NSM’s recognition of “discrimination and bias” as one of nine core AI risk categories20 that agency heads must “monitor, assess, and mitigate” in their agencies’ development and use of AI.21 Additionally, the incoming administration may repeal or revise M-24-10—a counterpart to the NSM’s framework that addresses AI risks outside of the national security context—effectively preventing the NSM’s framework from incorporating M-24-10’s various “rights-impacting” use cases.22 These provisions are easily severable from the current NSM and its framework.
“Safe, Secure, and Trustworthy” AI
One primary focus of the NSM is facilitating the “responsible” adoption of AI by promoting the “safety, security, and trustworthiness” of AI systems through risk management practices, standard-setting, and safety evaluations. In many ways, the NSM’s approach in these sections is consistent with aspects of the first Trump administration’s AI policy, but there is growing conservative and industry support for a deregulatory approach to speed up AI adoption.
The first Trump administration kickstarted federal government efforts to accelerate AI development and adoption with two AI-related executive orders issued in 2019 and 2020. At the time, the administration saw trust and safety as important factors for facilitating adoption of AI technology; the 2019 EO noted that “safety and security concerns” were “barriers to, or requirements associated with” widespread AI adoption, emphasized the need to “foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values,” and required the National Institute of Standards and Technology (NIST) to develop a plan for the federal government to assist in the development of technical standards “in support of reliable, robust, and trustworthy” AI systems.23 The 2020 EO sought to “promot[e] the use of trustworthy AI in the federal government” in non-national security contexts and, in service of this goal, articulated principles for the use of AI by government agencies. According to the 2020 EO, agency use of AI systems should be “safe, secure, and resilient,” “transparent,” “accountable,” and “responsible and traceable.”24
The Biden administration continued along a similar course,25 focusing on the development of soft law mechanisms to mitigate AI risks, including voluntary technical standards, frameworks, and agreements. Echoing the 2019 Trump EO, the Biden NSM argues that standards for safety, security, and trustworthiness will speed up adoption “thanks to [the] increased certainty, confidence, and compatibility” they bring.
But 2020 was a lifetime ago in terms of AI policy. Ultimately, the real question is not whether the second Trump administration thinks that safety, security, and trustworthiness are relevant, but rather whether the NSM provisions relating to trustworthy AI are viewed as, at the margins, facilitating adoption or hindering innovation. While there is certainly overlap between the two administrations’ views, some conservatives have objected to the Biden administration’s AI policy outside of the national security context on the grounds that it focused on safety, security, and trustworthiness primarily for the sake of preventing various harms instead of as a means to encourage and facilitate AI adoption.26 Others have expressed skepticism regarding discussions of “trust and safety” on the grounds that large tech companies might use safety concerns to stymie competition, ultimately leading to reduced innovation and harm to consumers. In particular, the mandatory reporting requirements placed on AI companies by President Biden’s 2023 EO faced conservative opposition; the 2024 GOP platform asserts that the EO will “hinder[] AI innovation” and promises to overturn it.
Concretely, the NSM requires agencies in the national security enterprise to use its accompanying risk management framework as they implement AI systems; to conduct certain evaluations and testing of AI systems; to monitor, assess, and mitigate AI-related risks; to issue and regularly update agency-specific AI governance and risk management guidance; and to appoint Chief AI Officers and establish AI Governance Boards.27 The NSM intends these Officers and Boards to ensure accountability, oversight, and transparency in the implementation of the NSM’s framework.28 The NSM also designates NIST’s AI Safety Institute (AISI) to “serve as the primary United States government point of contact with private sector AI developers to facilitate voluntary pre- and post-public-deployment testing for safety, security, and trustworthiness,” conduct voluntary preliminary pre-deployment testing on at least two frontier AI models, create benchmarks for assessing AI system capabilities, and issue guidance on testing, evaluation, and risk management.29 Various other agencies with specific expertise are required to provide “classified sector-specific evaluations” of advanced AI models for cyber, nuclear, radiological, biological, and chemical risks.30
Unlike the 2023 Biden EO, which invoked the Defense Production Act to impose its mandatory reporting requirements on private companies, the NSM’s provisions on safe, secure, and trustworthy AI impose no mandatory obligations on private companies. This, in addition to the NSM’s national security focus, might induce the Trump administration to leave most of these provisions in effect.31 However, not all members of the incoming administration may view such a focus on risk management, standards, testing, and evaluation as the best path to AI adoption across the national security enterprise. If arguments for a deregulatory approach toward AI adoption win the day, these NSM provisions and possibly the entire memorandum could face a full repeal. Regardless of the exact approach, the incoming administration seems likely to keep AI innovation as its north star, taking an affirmative approach that focuses on the benefits enabled by AI and treats safety, security, and trustworthiness instrumentally, to the degree the administration judges necessary to enable U.S. AI leadership.
Responding to Foreign Threats, Particularly from China
President Trump seems likely to maintain or expand the NSM directives aimed at impeding the AI development efforts of China and other U.S. adversaries. Over the last decade, Washington has seen bipartisan consensus behind efforts to respond to economic gray zone tactics32 used by U.S. adversaries, particularly China, to compete with the United States. These tactics have included university research partnerships, cyber espionage, insider threats, and both foreign investments in U.S. companies and aggressive headhunting of those companies’ employees to facilitate technology transfer. The NSM builds upon efforts to combat these gray zone tactics by reassessing U.S. intelligence priorities with an eye toward focusing on the U.S. AI ecosystem, strengthening inbound investment screening, and directing the Intelligence Community (IC) to focus on risks to the AI supply chain. If President Trump does not elect to repeal the NSM in its entirety, it seems likely that he will build upon each of these NSM provisions, although potentially applying an approach that more explicitly targets China.33
The NSM requires a review and recommendations for revision of the Biden administration’s intelligence priorities, incorporating into those priorities risks to the U.S. AI ecosystem and enabling sectors.34 The recommendations, which the White House will likely complete before the inauguration,35 should help inform the incoming administration’s intelligence priorities. Though this implementation will not make headlines, it could significantly strengthen the incoming administration’s enforcement efforts by enabling better strategies and targeting decisions for export controls, tariffs (including possible component tariffs), outbound investment restrictions, and other measures.
Additionally, the NSM strengthens inbound investment screening by requiring the Committee on Foreign Investment in the United States (CFIUS) to consider, as part of its screenings, whether a given transaction involves foreign actor access to proprietary information related to any part of the AI lifecycle.36 This provision is consistent with President Trump’s strengthening of CFIUS during his first term—both by championing the Foreign Investment Risk Review Modernization Act (FIRRMA), which expanded CFIUS’s jurisdiction and review process, and by increasing scrutiny of foreign acquisitions of U.S. tech companies, including semiconductor companies. By specifically requiring an analysis of risks related to access to proprietary AI information, this provision seems likely to increase scrutiny of AI-relevant foreign investments covered by CFIUS37 and to make it more likely that AI-related transactions will be blocked.
The NSM also requires the Intelligence Community to identify critical AI supply chain nodes, assess how foreign actors could disrupt or compromise those nodes, and act to mitigate related risks.38 This directive is consistent with the first Trump administration’s aggressive approach to the use of export controls—including through authorities from the Export Control Review Act (ECRA), which President Trump signed into law—and diplomacy to disrupt China’s ability to manufacture or acquire critical AI supply chain components. It also parallels Trump-era efforts to secure telecommunications supply chains. Increased IC scrutiny of AI supply chain nodes may provide intelligence allowing the United States and its allies and partners to better leverage their supply chain advantages, just as the Biden administration has attempted to do through multiple new export controls.
Based on these consistencies across the last two administrations and public statements from President Trump’s incoming U.S. Trade Representative Jamieson Greer, the new administration seems poised to double down on the NSM’s combined efforts to protect against Chinese and other adversarial threats to the U.S. AI ecosystem. Congress also appears amenable to further strengthening the President’s ECRA authorities in support of possible Trump administration efforts, extending those authorities to cover AI systems and the cloud compute providers that enable the training of AI models.39 However, President Biden’s recent export controls have met with opposition from major players in the semiconductor supply chain and conservative open-source advocates. President Trump could also potentially use such restrictions as bargaining chips, easing restrictions in order to secure concessions from foreign competitors in other policy areas.
Because the NSM’s provisions related to foreign threats do not significantly affect open-source models, they seem unlikely to provoke many objections from the incoming administration, except to the extent that they do not go far enough or avoid explicitly identifying China.40 This does not necessarily mean that President Trump will avoid repealing them, however, as it remains possible that the incoming administration will find it more convenient to repeal the entire document and replace provisions as necessary than to pick and choose its targets.
Infrastructure
President Trump seems likely to expand upon or at least continue the NSM’s provisions that focus on developing the energy infrastructure necessary to meet expected future AI power needs (without the Biden Administration’s focus on clean power), strengthening domestic chip production, and making AI resources accessible to diverse actors while increasing the government’s efficiency in using its own AI resources.
Bipartisan consensus exists around the need to build the infrastructure required to facilitate the development of next-generation AI systems. President Trump has already signaled that this issue will be one of his top priorities, and President Biden recently issued an Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure.41 Although both parties recognize the importance of AI infrastructure, President Trump’s team has indicated that it intends to adopt a modified “energy dominance” version of this priority. Where the Biden administration sees the United States as being at risk of falling behind without additional clean power, the Trump administration views the nation as already behind the curve and needing to address its energy deficit with all types of power, including fossil fuels and nuclear energy. The incoming administration also sees power generation and the resulting lower energy prices as a potential asymmetric advantage for the United States in the “A.I. arms race.” Therefore, President Trump seems likely to significantly expand on the Biden administration’s efforts to provide power for the U.S. AI ecosystem. With respect to the NSM, this likely means that the provision requiring the White House Chief of Staff to coordinate the streamlining of permits, approvals, and incentives for the construction of AI-enabling infrastructure and supporting assets42 will survive, unless the NSM is repealed in its entirety.
Though the NSM does not focus on U.S. chip production infrastructure, National Security Advisor Sullivan pointed to the progress already made through the CHIPS and Science Act’s “generational investment in [U.S.] semiconductor manufacturing” in his speech announcing the memorandum. Under the incoming administration, however, the survival of that bipartisan effort is somewhat uncertain. While the legislation has received significant support from Republican members of Congress, President Trump has criticized the bill as being less efficient than tariffs, and he could delay or block the distribution of promised funds to chip companies. However, the concept of incentivizing foreign chip firms to build fabs in the United States was originally devised during the first Trump administration, and Taiwan Semiconductor Manufacturing Company (TSMC) made its first Arizona investment during that term. Some commentators have argued that, at least for many segments of the chip industry, tariffs alone will not solve the United States’ chip problem. Given strong Republican support for CHIPS Act-funded U.S. factories and the national security case for such investments, it seems most likely that the incoming administration will continue to advance many of the bill’s infrastructure goals. Instead of attempting a broad reversal, President Trump might remove requirements from application guidelines that mandate that funding recipients provide child care, encourage union labor, and demonstrate environmental responsibility, including using renewable energy to operate their facilities.
The NSM also requires agencies to consider AI needs in their construction and renovation of federal compute facilities;43 begin a federated AI and data sources pilot project;44 and distribute compute, data, and other AI assets to actors who would otherwise lack access.45 As these assignments seem largely consistent with prior Trump efforts, they appear more likely than not to continue. The AI construction assessment requirement and the federated AI pilot appear to align with the incoming administration’s focus on efficiency. Additionally, during his first term President Trump signed into law the legislation that began the National AI Research Resource (NAIRR), and he may continue to support its mission of democratizing access to AI assets, although potentially not at the levels requested by the Biden Administration’s Director of the Office of Science and Technology Policy.
Talent and Immigration
Whether the NSM provisions relating to high-skilled immigration survive under the new administration is uncertain, but non-immigration initiatives focused on AI talent seem likely to survive.
The NSM aims to better recruit and retain AI talent at national security agencies by revising federal hiring and retention policies to accelerate responsible AI adoption,46 identifying education and training opportunities to increase AI fluency across the national security workforce,47 establishing a National Security AI Executive Talent Committee,48 and conducting “an analysis of the AI talent market in the United States and overseas” to inform future AI talent policy choices.49 These initiatives seem consistent with actions taken in the previous Trump administration and, therefore, likely to survive in some form. For example, President Trump’s signature AI for the American Worker initiative focused on training and upskilling workers with AI-relevant skills. President Trump also signed into law the bill that established the National Security Commission on AI, which completed the most significant government analysis of the AI national security challenge and whose final report emphasized the importance of recruiting and retaining AI talent within the government’s national security enterprise.
The NSM also seeks to better compete for AI talent by directing relevant agencies both to “use all available legal authorities to assist in attracting and rapidly bringing to the United States” individuals who would increase U.S. competitiveness in “AI and related fields”50 and to convene agencies to “explore actions for prioritizing and streamlining administrative processing operations for all visa applicants working with sensitive technologies.”51 Specifically, this effort would likely involve continued work to expand and improve the H-1B visa process, as well as other potential skilled immigration pathways and policies like O-1A and J-1 visas, Optional Practical Training, the International Entrepreneur Rule, and the Schedule A list.
President Trump’s position on high-skilled immigration and specifically the H-1B program appears to have softened since his first term, but it is unclear to what degree and how that will affect his policy decisions. On the campaign trail this year, President Trump stated his support for providing foreign graduates of U.S. universities and even “junior colleges” with green cards to stay in the United States, although his campaign later walked back the statement and clarified the need for “the most aggressive vetting process in U.S. history” before permitting graduates to stay. President Trump’s Senior Policy Advisor for AI Sriram Krishnan is a strong supporter of H-1B visa expansion. And most significantly, in response to the fiery online debate that followed the Krishnan announcement, pitting President Trump’s pro-H-1B advisors Elon Musk and Vivek Ramaswamy against prominent H-1B critics like Steve Bannon and Laura Loomer, President Trump reaffirmed his support for H-1B visas, saying, “we need smart people coming into our country.”
However, a significant portion of President Trump’s political base would prefer to shrink the H-1B program, as he did during his first term to protect American jobs. During that first term, President Trump repeatedly cut H-1B visas for skilled immigrants, including through his “Buy American, Hire American” Executive Order and interim H-1B program revision.52 His former Senior Advisor Stephen Miller and former Acting U.S. Immigration and Customs Enforcement Director Tom Homan, who were major proponents of these H-1B cuts, will serve in the new administration as Homeland Security Advisor and “border czar,” respectively. These posts will likely allow them to exert significant influence on the President’s immigration decisions and, potentially, to prevail over Trump’s supporters in Silicon Valley and other potential proponents of highly skilled immigration like Jared Kushner and UN Ambassador nominee Elise Stefanik. Additionally, cracking down on immigration in order to “put American workers first” was a core element of the 2024 Republican Party platform.
One key early indicator of which way President Trump leans will be whether he continues, or attempts to roll back, the Biden administration’s long-awaited revision to the H-1B program, which went into effect on the last business day before President Trump’s inauguration and includes attempts to streamline the approvals process, increase flexibility, and strengthen oversight.
Conclusion
It is clear that AI will be a key part of the incoming administration’s national security policy. Across his campaign, President Trump prioritized developing domestic AI infrastructure, particularly energy production, and, since winning the election, he has prioritized the appointment of multiple high-level AI advisors.
However, while some of the incoming administration’s responses to the NSM seem locked in—notably, removing provisions relating to discrimination and bias, building on the NSM’s shift toward increasing U.S. power production to support AI energy needs, and continuing efforts to slow China’s development of advanced AI systems—there are also key areas where the administration’s responses remain uncertain. Regardless of how the Trump administration’s policies at the intersection of AI and national security shake out, its response to the NSM will serve as a useful early indicator of what direction those policies will take.