Chips for Peace: how the U.S. and its allies can lead on safe and beneficial AI
This piece was originally published in Lawfare.
The United States and its democratic allies can lead in AI and use this position to advance global security and prosperity.
On Dec. 8, 1953, President Eisenhower addressed the UN General Assembly. In his “Atoms for Peace” address, he set out the U.S. view on the risks and hopes for a nuclear future, leveraging the U.S.’s pioneering lead in that era’s most critical new technology in order to make commitments to promote its positive uses while mitigating its risks to global security. The speech laid the foundation for the international laws, norms, and institutions that have attempted to balance nuclear safety, nonproliferation of nuclear weapons, and peaceful uses of atomic energy ever since.
As a diverse class of largely civilian technologies, artificial intelligence (AI) is unlike nuclear technology in many ways. However, at the extremes, the stakes of AI policy this century might approach those of nuclear policy last century. Future AI systems may have the potential to unleash rapid economic growth and scientific advancement—or endanger all of humanity.
The U.S. and its democratic allies have secured a significant lead in AI supply chains, development, deployment, ethics, and safety. As a result, they have an opportunity to establish new rules, norms, and institutions that protect against extreme risks from AI while enabling widespread prosperity.
The United States and its allies can capitalize on that opportunity by establishing “Chips for Peace,” a framework with three interrelated commitments to address some of AI’s largest challenges.
First, states would commit to regulating their domestic frontier AI development and deployment to reduce risks to public safety and global security. Second, states would agree to share the benefits of safe frontier AI systems broadly, especially with states that would not benefit by default. Third, states would coordinate to ensure that nonmembers cannot undercut the other two commitments. This could be accomplished through, among other tools, export controls on AI hardware and cloud computing. The ability of the U.S. and its allies to exclude noncomplying states from access to the chips and data centers that enable the development of frontier AI models undergirds the whole agreement, similar to how regulation of highly enriched uranium undergirds international regulation of atomic energy. Collectively, these three commitments could form an attractive package: an equitable way for states to advance collective safety while reaping the benefits of AI-enabled growth.
Three grand challenges from AI
The Chips for Peace framework is a package of interrelated and mutually reinforcing policies aimed at addressing three grand challenges in AI policy.
The first challenge is catastrophe prevention. AI systems carry many risks, and Chips for Peace does not aim to address them all. Instead, Chips for Peace focuses on possible large-scale risks from future frontier AI systems: general-purpose AI systems at the forefront of capabilities. Such “catastrophic” risks are often split into misuse and accidents.
For misuse, the domain that has recently garnered the most attention is biosecurity: specifically, the possibility that future frontier AI systems could make it easier for malicious actors to engineer and weaponize pathogens, especially if coupled with biological design tools. Current generations of frontier AI models are not very useful for this. When red teamers at RAND attempted to use large language model (LLM) assistants to plan a more viable simulated bioweapon attack, they found that the LLMs provided answers that were inconsistent, inaccurate, or merely duplicative of what was readily discoverable on the open internet. It is reasonable to worry, though, that future frontier AI models might be more useful to attackers. In particular, lack of tacit knowledge may be an important barrier to successfully constructing and implementing planned attacks. Future AI models with greater accuracy, scientific knowledge, reasoning capabilities, and multimodality may be able to compensate for attackers’ lack of tacit knowledge by providing real-time tailored troubleshooting assistance to attackers, thus narrowing the gap between formulating a plausible high-level plan and “successfully” implementing it.
For accidental harms, the most severe risk might come from future increasingly agentic frontier AI systems: “AI systems that can pursue complex goals with limited direct supervision” through use of computers. Such a system could, for example, receive high-level goals from a human principal in natural language (e.g., “book an island getaway for me and my family next month”), formulate a plan for how best to achieve that goal (e.g., find availability on family calendars, identify possible destinations, secure necessary visas, book hotels and flights, arrange for pet care), and take or delegate actions necessary to execute on that plan (e.g., file visa applications, email dog sitters). If such agentic systems are invented and given more responsibility than managing vacations—such as managing complex business or governmental operations—it will be important to ensure that they are easily controllable. But our theoretical ability to reliably control these agentic AI systems is still very limited, and we have no strong guarantee that currently known methods will work for smarter-than-human AI agents, should they be invented. Losing control over such agents might mean being unable to prevent them from harming us.
Time will provide more evidence about whether and to what extent these are major risks. However, for now there is enough cause for concern to begin thinking about what policies could reduce the risk of such catastrophes, should further evidence confirm the plausibility of these harms and justify actual state intervention.
The second—no less important—challenge is ensuring that the post-AI economy enables shared prosperity. AI is likely to present acute challenges to this goal. In particular, AI has strong tendencies towards winner-take-all dynamics, meaning that, absent redistributive efforts, the first countries to develop AI may reap an outsized portion of its benefits and make catch-up growth more difficult. If AI labor can replace human labor, then many people may struggle to earn enough income, including the vast majority of people who do not own nearly enough financial assets to live on. I personally think using the economic gains from AI to uplift the entire global economy is a moral imperative. But this would also serve U.S. national security. A credible, U.S.-endorsed vision for shared prosperity in the age of AI can form an attractive alternative to the global development initiatives led by China, whose current technological offerings are undermining the U.S.’s goals of promoting human rights and democracy, including in the Global South.
The third, meta-level challenge is coordination. A single state may be able to implement sensible regulatory and economic policies that address the first two challenges locally. But AI development and deployment are global activities. States are already looking to accelerate their domestic AI sectors as part of their grand strategy, and they may be tempted to loosen their laws to attract more capital and talent. They may also wish to develop their own state-controlled AI systems. But if the price of lax AI regulation is a global catastrophe, all states have an interest in avoiding a race to the bottom by setting and enforcing strong and uniform baseline rules.
The U.S.’s opportunity to lead
The U.S. is in a strong position to lead an effort to address these challenges, for two main reasons: U.S. leadership throughout much of the frontier AI life cycle and its system of alliances.
The leading frontier AI developers—OpenAI (where, for disclosure, I previously worked), Anthropic, Google DeepMind, and Meta—are all U.S. companies. The largest cloud providers that host the enormous (and rising) amounts of computing power needed to train a frontier AI model—Amazon, Microsoft, Google, and Meta—are also American. Nvidia chips are the gold standard for training and deploying large AI models. A large, dynamic, and diverse ecosystem of American AI safety, ethics, and policy nonprofits and academic institutions has contributed to our understanding of the technology, its impacts, and possible safety interventions. The U.S. government has invested substantially in AI readiness, including through the CHIPS Act, the executive order on AI, and the AI Bill of Rights.
Complementing this leadership is a system of alliances linking the United States with much of the world. American leadership in AI depends on the notoriously complicated and brittle semiconductor supply chain. Fortunately, however, key links in that supply chain are dominated by the U.S. or its democratic allies in Asia and Europe. Together, these countries contribute more than 90 percent of the total value of the supply chain. Taiwan is home to TSMC, which fabricates 90 percent of advanced AI chips. TSMC’s only major competitors are Samsung (South Korea) and Intel (U.S.). The Netherlands is home to ASML, the world’s only company capable of producing the extreme ultraviolet lithography tools needed to make advanced AI chips. Japan, South Korea, Germany, and the U.K. all hold key intellectual property or produce key inputs to AI chips, such as semiconductor manufacturing equipment or chip wafers. The U.K. has also catalyzed global discussion about the risks and opportunities from frontier AI, starting with its organization of the first AI Safety Summit last year and its trailblazing AI Safety Institute. South Korea recently hosted the second summit, and France will pick up that mantle later this year.
These are not just isolated strengths—they are leading to collective action. Many of these countries have been coordinating with the U.S. on export controls to retain control over advanced computing hardware. The work following the initial AI Safety Summit—including the Bletchley Declaration, the International Scientific Report on the Safety of Advanced AI, and the Seoul Declaration—also shows increased openness to multilateral cooperation on AI safety.
Collectively, the U.S. and its allies have a large amount of leverage over frontier AI development and deployment. They are already coordinating on export controls to maintain this leverage. The key question is how to use that leverage to address this century’s grand challenges.
Chips for Peace: three commitments for three grand challenges
Chips for Peace is a package of three commitments—safety regulation, benefit-sharing, and nonproliferation—which complement and strengthen each other. For example, benefit-sharing compensates states for the costs associated with safety regulation and nonproliferation, while nonproliferation prevents nonmembers from undermining the regulation and benefit-sharing commitments. While the U.S. and its democratic allies would form the backbone of Chips for Peace due to their leadership in AI hardware and software, membership should be open to most states that are willing to abide by the Chips for Peace package.
Safety regulation
As part of the Chips for Peace package, members would first commit to implementing domestic safety regulation. Member states would commit to ensuring that any frontier AI systems developed or deployed within their jurisdictions meet consistent safety standards narrowly tailored to prevent global catastrophic risks from frontier AI. Monitoring of large-scale compute providers would enable enforcement of these standards.
Establishing a shared understanding of catastrophic risks from AI is the first step toward effective safety regulation. There is already exciting consensus formation happening here, such as through the International Scientific Report on the Safety of Advanced AI and the Seoul Declaration.
The exact content of safety standards for frontier AI is still an open question, not least because we currently do not know how to solve all AI safety problems. Current methods of “aligning” (i.e., controlling) AI behavior rely on our ability to assess whether that behavior is desirable. For behaviors that humans can easily assess, such as determining whether paragraph-length text outputs are objectionable, we can use techniques such as reinforcement learning from human feedback and Constitutional AI. These techniques already have limitations, and those limitations may become more severe as AI systems’ behaviors become more complicated and therefore more difficult for humans to evaluate.
Despite our imperfect knowledge of how to align AI systems, there are some frontier AI safety recommendations that are beginning to garner consensus. One emerging suggestion is to start by evaluating such models for specific dangerous capabilities prior to their deployment. If a model lacks capabilities that meaningfully contribute to large-scale risks, then it should be outside the jurisdiction of Chips for Peace and left to individual member states’ domestic policy. If a model has dangerous capabilities sufficient to pose a meaningful risk to global security, then there should be clear rules about whether and how the model may be deployed. In many cases, basic technical safeguards and traditional law enforcement will bring risk down to a sufficient level, and the model can be deployed with those safeguards in place. Other cases may need to be treated more restrictively. Monitoring the companies using the largest amounts of cloud compute within member states’ jurisdictions should allow states to reliably identify possible frontier AI developers, while imposing few constraints on the vast majority of AI development.
Benefit-sharing
To legitimize and drive broad adoption of Chips for Peace as a whole—and compensate for the burdens associated with regulation—members would also commit to benefit-sharing. States that, by default, stand to benefit the most from frontier AI development and deployment would be obligated to contribute to programs that ensure benefits from frontier AI are broadly distributed, especially to member states in the Global South.
We are far from understanding what an attractive and just benefit-sharing regime would look like. “Benefit-sharing,” as I use the term, is supposed to encompass many possible methods. Some international regulatory regimes, like the International Atomic Energy Agency (IAEA), contain benefit-sharing programs that provide some useful precedent. However, some in the Global South understandably feel that such programs have fallen short of their lofty aspirations. Chips for Peace may also have to compete with more laissez-faire offers for technological aid from China. To make Chips for Peace an attractive agreement for states at all stages of development, states’ benefit-sharing commitments will have to be correspondingly ambitious. Accordingly, member states likely to be recipients of such benefit-sharing should be in the driver’s seat in articulating benefit-sharing commitments that they would find attractive and should be well represented from the beginning in shaping the overall Chips for Peace package. Each state’s needs are likely to be different, so there is not likely to be a one-size-fits-all benefit-sharing policy. Possible forms of benefit-sharing from which such states could choose include subsidized access to deployed frontier AI models, assistance tailoring models to local needs, dedicated inference capacity, domestic capacity-building, and cash.
A word of caution is warranted, however. Benefit-sharing commitments need to be generous enough to attract widespread agreement, justify the restrictive aspects of Chips for Peace, and advance shared prosperity. But poorly designed benefit-sharing could be destabilizing, such as if it enabled the recipient state to defect from the agreement but still walk away with shared assets (e.g., compute and model weights) and thus undermine the nonproliferation goals of the agreement. Benefit-sharing thus needs to be simultaneously empowering to recipient states and robust to their defection. Designing technical and political tools that accomplish both of these goals at once may therefore be crucial to the viability of Chips for Peace.
Nonproliferation
A commitment to nonproliferation of harmful or high-risk capabilities would make the agreement more stable. Member states would coordinate on policies to prevent non-member states from developing or possessing high-risk frontier AI systems and thereby undermining Chips for Peace.
Several tools could advance nonproliferation. The first is cybersecurity requirements that prevent exfiltration of frontier AI model weights. The second, more speculatively, is on-chip hardware mechanisms that could prevent exported AI hardware from being used for certain risky purposes.
The third possible tool is export controls. The nonproliferation aspect of Chips for Peace could be a natural broadening and deepening of the U.S.’s ongoing efforts to coordinate export controls on AI chips and their inputs. These efforts rely on the cooperation of allies. Over time, as this system of cooperation becomes more critical, these states may want to formalize their coordination, especially by establishing procedures that check the unilateral impulses of more powerful member states. In this way, Chips for Peace could initially look much like a new multilateral export control regime: a 21st-century version of COCOM, the Cold War-era Coordinating Committee for Multilateral Export Controls (the predecessor of the current Wassenaar Arrangement). Current export control coordination efforts could also expand beyond chips and semiconductor manufacturing equipment to include large amounts of cloud computing capacity and the weights of models known to present a large risk. Nonproliferation should also include imposition of security standards on parties possessing frontier AI models. The overall goal would be to reduce the chance that nonmembers can indigenously develop, otherwise acquire (e.g., through espionage or sale), or access high-risk models, except under conditions multilaterally set by Chips for Peace states-parties.
As the name implies, this package of commitments draws loose inspiration from the Treaty on the Non-Proliferation of Nuclear Weapons and the IAEA. Comparisons to these precedents could also help Chips for Peace avoid some of the missteps of past efforts.
Administering Chips for Peace
How would Chips for Peace be administered? Perhaps one day we will know how to design an international regulatory body that is sufficiently accountable, legitimate, and trustworthy for states to be willing to rely on it to directly regulate their domestic AI industries. But this currently seems out of reach. Even if states perceive international policymaking in this domain as essential, they are understandably likely to guard their sovereignty over their domestic AI industries quite jealously.
A more realistic approach might be harmonization backed by multiple means of verifying compliance. States would come together to negotiate standards that are promulgated by the central intergovernmental organization, similar to the IAEA Safety Standards or Financial Action Task Force (FATF) Recommendations. Member states would then be responsible for substantial implementation of these standards in their own domestic regulatory frameworks.
Chips for Peace could then rely on a number of tools to detect and remedy member state noncompliance with these standards and thus achieve harmonization even though the international standards are not directly binding on states. The first is inspections or evaluations performed by experts at the intergovernmental organization itself, as in the IAEA. The second is peer evaluation, in which member states assess each other’s compliance; this is used in both the IAEA and the FATF. Finally, and often implicitly, the most influential member states, such as the U.S., use a variety of tools—including intelligence, law enforcement (including extraterritorially), and diplomatic efforts—to detect and remedy policy lapses.
The hope is that these three approaches combined may be adequate to bring compliance to a viable level. Noncompliant states would risk being expelled from Chips for Peace and thus cut off from frontier AI hardware and software.
Open questions and challenges
Chips for Peace has enormous potential, but an important part of ensuring its success is acknowledging the open questions and challenges that remain. First, the analogy between AI chips and highly enriched uranium (HEU) is imperfect. Most glaringly, AI models (and therefore AI chips) have a much wider range of beneficial and benign applications than HEU. Second, we should be skeptical that implementing Chips for Peace will be a simple matter of copying the nuclear arms control apparatus to AI. While we can probably learn a lot from nuclear arms control, nuclear inspection protocols took decades to evolve, and the different technological features of large-scale AI computing will necessitate new methods of monitoring, verifying, and enforcing agreements.
Which brings us to the challenge of monitoring, verification, and enforcement (MVE) more generally. We do not know whether and how MVE can be implemented at acceptable costs to member states and their citizens. There are nascent proposals for how hardware-based methods could enable highly reliable and (somewhat) secrecy-preserving verification of claims about how AI chips have been used, and prevent such chips from being used outside an approved setting. But we do not yet know how robust these mechanisms can be made, especially in the face of well-resourced adversaries.
Chips for Peace probably works best if most frontier AI development is done by private actors, and member states can be largely trusted to regulate their domestic sectors rigorously and in good faith. But these assumptions may not hold. In particular, perceived national security imperatives may drive states to become more involved in frontier AI development, such as through contracting for, modifying, or directly developing frontier AI systems. Asking states to regulate their own governmental development of frontier AI systems may be harder than asking them to regulate their private sectors. Even if states are not directly developing frontier AI systems, they may also be tempted to be lenient toward their national champions to advance their security goals.
Funding has also been a persistent issue in multilateral arms control regimes. Chips for Peace would likely need a sizable budget to function properly, but there is no guarantee that states will be more financially generous in the future. Work toward designing credible and sustainable funding mechanisms for Chips for Peace could be valuable.
Finally, although I have noted that the U.S.’s democratic allies in Asia and Europe would form the core of Chips for Peace due to their collective ability to exclude parties from the AI hardware supply chain, I have left open the question of whether membership should be open only to democracies. Promoting peaceful and democratic uses of AI should be a core goal of the U.S. But the challenges from AI can and likely will transcend political systems. China has shown some initial openness to preventing competition in AI from causing global catastrophe. China is also trying to establish an independent semiconductor ecosystem despite export controls on chips and semiconductor manufacturing equipment. If these efforts are successful, Chips for Peace would be seriously weakened unless China were admitted. As during the Cold War, we may one day have to create agreements and institutions that cross ideological divides in the shared interest of averting global catastrophe.
While the risk of nuclear catastrophe still haunts us, we are all much safer due to the steps the U.S. took last century to manage this risk.
AI may bring risks of a similar magnitude this century. The U.S. may once again be in a position to lead a broad, multilateral coalition to manage these enormous risks. If so, a Chips for Peace model may manage those risks while advancing broad prosperity.
LawAI’s thoughts on proposed updates to U.S. federal benefit-cost analysis
This analysis is based on a comment submitted in response to the Request for Comment on proposed Circular A-4, “Regulatory Analysis”.
Abstract
This piece presents LawAI’s thoughts on proposed Circular A-4, “Regulatory Analysis”. The Office of Management and Budget (“OMB” or “Office”) issued the public notice to solicit comments on this proposed guidance for “conducting high-quality and evidence-based regulatory analysis,” which is intended to assist agencies in evaluating the benefits and costs of regulations subject to review pursuant to Executive Orders 12866 and 13563.
We support the many important and substantial reforms to the regulation review process in the proposed Circular A-4. The reforms, if adopted, would reduce the odds of regulations imposing undue costs on vulnerable, underrepresented, and disadvantaged communities both now and well into the future. In this piece, we outline a few additional changes that would further reduce those odds: expanding the scope of analysis to include catastrophic and existential risks, including those far in the future; including future generations in distributional analysis; providing more guidance regarding model uncertainty and regulations that involve irreversible outcomes; and lowering the discount rate to zero for irreversible effects in a narrow set of cases or, minimally, lowering the discount rate in proportion to the temporal scope of a regulation.
1. Circular A-4 contains many improvements, including consideration of global impacts, expanding the temporal scope of analysis, and recommendations on developing an analytical baseline.
Circular A-4 contains many improvements on the current approach to benefit-cost analysis (BCA). In particular, the proposed reforms would allow for a more comprehensive understanding of the myriad risks posed by any regulation. The guidance for analysis to include global impacts[ref 1] will more accurately account for the effects of a regulation on increasingly interconnected and interdependent economic, political, and environmental systems. Many global externalities, such as pandemics and climate change, require international regulatory cooperation; in these cases, efficient allocation of global resources, which benefits the United States and its citizens and residents, requires all countries to consider global costs and benefits.[ref 2]
The instruction to tailor the time scope of analysis to “encompass all the important benefits and costs likely to result from regulation” will likewise bolster the quality of a risk assessment[ref 3]—though, as mentioned below, a slight modification to this instruction could aid regulators in identifying and mitigating existential risks posed by regulations.
The recommendations on developing an analytic baseline have the potential to increase the accuracy and comprehensiveness of BCA by ensuring that analysts integrate current and likely technological developments and the resulting harms of those developments into their baseline.[ref 4]
A number of other proposals would also qualify as improvements on the status quo. Many other commentors have discussed those proposals, so the remainder of this piece is reserved for suggested amendments and recommendations for topics worthy of additional consideration.
2. The footnote considering catastrophic risks is a welcome addition that could be further strengthened with a minimum time frame of analysis and clear inclusion of catastrophic and existential threats in “important” and “likely” benefits and costs.
The proposed language will lead to a more thorough review of the benefits and costs of a regulation by expanding the time horizon over which those effects are assessed.[ref 5] We particularly welcome the footnote encouraging analysts to consider whether a regulation that involves a catastrophic risk may impose costs on future generations.[ref 6]
We offer two suggestions to further strengthen this footnote’s purpose of encouraging the consideration of catastrophic and existential risks and the long-run effects of related regulation. First, we recommend mandating consideration of the long-run effects of a regulation.[ref 7] Given the economic significance of a regulation that triggers review under Executive Orders 12866 and 13563, as supplemented and reaffirmed by Executive Order 14094, the inevitable long-term impacts deserve consideration—especially because regulations of such size and scope could affect catastrophic and existential risks that imperil future generations. Thus, the Office should consider establishing a minimum time frame of analysis to ensure that long-run benefits and costs are adequately considered, even if they are sometimes found to be negligible or highly uncertain.
Second, the final draft should clarify what constitutes an “important” benefit and cost as well as when those effects will be considered “likely”.[ref 8] We recommend that those concepts clearly encompass potential catastrophic or existential threats, even those that have very low likelihood.[ref 9] An expansive definition of both qualifiers would allow the BCA to provide stakeholders with a more complete picture of the regulation’s short- and long-term impact.
3. Distributional analysis should become the default of regulatory review and include future generations as a group under consideration.
The potential for disparate effects of regulations on vulnerable, underrepresented, and disadvantaged groups merits analysis in all cases. Along with several other commentors, we recommend that distributional analysis become the default of any regulatory review. When possible, we further recommend that such analysis include future generations among the demographic categories.[ref 10] Future generations have no formal representation and will bear the costs imposed by any regulation for longer than other groups.[ref 11]
The Office should also consider making this analysis mandatory, with no exceptions. Such a mandate would reduce the odds of any group unexpectedly bearing a disproportionate and unjust share of the costs of a regulation. The information generated by this analysis would also give groups a more meaningfully informed opportunity to engage in the review of regulations.
4. Treatment of uncertainty is crucial for evaluating long-term impacts and should include more guidance regarding models, model uncertainty, and regulations that involve irreversible outcomes.
Circular A-4 directs agencies to seek out and respond to several different types of uncertainty from the outset of their analysis.[ref 12] This direction will allow for a more complete understanding of the impacts of a regulation in both the short and long term. Greater direction would accentuate those benefits.
The current model uncertainty guidance, largely confined to a footnote, nudges agencies to “consider multiple models to establish robustness and reduce model uncertainty.”[ref 13] The brevity of this instruction conflicts with the complexity of this process. Absent more guidance, agencies may be poorly equipped to assess and treat uncertainty, which will frustrate the provision of “useful information to decision makers and the public about the effects and the uncertainties of alternative regulatory actions.”[ref 14] A more participatory, equitable, and robust regulation review process hinges on that information.
We encourage the Office to provide further examples and guidance on how to prepare models and address model uncertainty, in particular regarding catastrophic and existential risks, as well as significant benefits and costs in the far future.[ref 15] A more robust approach to responding to uncertainty would include explicit instructions on how to identify, evaluate, and report uncertainty regarding the future. Several commentors highlighted that estimates of costs and benefits become more uncertain over time. We echo and amplify concerns that regulations with forecasted effects on future generations will require more rigorous treatment of uncertainty.
We similarly recommend that more guidance be offered with respect to regulations that involve irreversible outcomes, such as exhaustion of resources or extinction of a species.[ref 16] The Circular notes that such regulations may benefit from a “real options” analysis; however, this brief guidance is inadequate to the significance of the topic. The Circular acknowledges that “[t]he costs of shifting the timing of regulatory effects further into the future may be especially high when regulating to protect against irreversible harms.” We agree that preserving option value for future generations is immensely important. How to value those options should receive more attention in subsequent drafts. Likewise, guidance on how to identify irreversible outcomes and conduct real options analysis merits more attention in forthcoming iterations.
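As a purely illustrative sketch of the option-value logic (the numbers are hypothetical, not drawn from the Circular or from any comment): suppose an irreversible action yields a net benefit of 50 with probability 0.5 and a net loss of 100 with probability 0.5, and the uncertainty resolves in the next period. Committing now has expected value

$$0.5(50) + 0.5(-100) = -25,$$

while waiting and acting only if the favorable state is realized yields, ignoring discounting,

$$0.5(50) + 0.5(0) = 25.$$

The difference of 50 is the value of retaining the ability to decide after the uncertainty resolves; actions that irreversibly foreclose that choice destroy this value, and real options analysis is the machinery for quantifying it.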
We recommend similar caution for regulations involving harms that are persistent and challenging to reverse, but not irreversible.
5. A lower discount rate and declining discount rate are necessary to account for the impact of regulations with significant and long-term effects on future generations.
The discount rate in a BCA is one signal of how much a society values the future. We join a chorus of commentors in applauding both the overall lowering of the discount rate as well as the idea of a declining discount rate schedule.
The diversity of perspectives in those comments, however, indicates that this topic merits further consideration. In particular, we would welcome further discussion on the merits of a zero discount rate. Though sometimes characterized as a blunt tool to attempt to assist future generations,[ref 17] zero discount rates may become necessary when evaluating regulations that involve irreversible harm.[ref 18] In cases involving irreversibility, a fundamental assumption about discounting breaks down—specifically, that the discounted resource has more value in the present because it can be invested and, as a result, generate more resources in subsequent periods.[ref 19] If the regulation involves the elimination of certain resources, such as nonrenewable resources, rather than their preservation or investment, then the value of those resources remains constant across time periods.[ref 20] Several commentors indicated that they share our concern about such harms, suggesting that they would welcome this narrow use case for zero discount rates.[ref 21]
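To make the discounting arithmetic concrete (the rate and figures below are illustrative assumptions, not values proposed in the Circular): under standard exponential discounting, a cost of value V incurred t years from now has present value

$$PV = \frac{V}{(1+r)^{t}}.$$

At a 2 percent rate, an irreversible loss valued at $1 billion occurring 200 years from now counts for roughly $19 million today (since (1.02)^200 is about 52); at a zero rate, it counts for the full $1 billion. The positive rate is typically justified by the assumption that resources conserved today can be invested and compounded; where a harm permanently destroys a resource that cannot be reinvested or replaced, that justification weakens, which is the narrow case in which a zero rate is argued for here.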
We likewise support the general concept of declining discount rates and further conversations regarding the declining discount rate (DDR) schedule,[ref 22] given the importance of such schedules in accounting for the impact of regulations with significant and long-term effects on future generations.[ref 23] US adoption of a DDR schedule would bring us into alignment with two peers—namely, the UK and France.[ref 24] The former, which is based on the Ramsey formula rather than a fixed DDR schedule like the one proposed, deserves particular attention given that it estimates time preference ρ as the sum of “pure time preference (δ, delta) and catastrophic risk (L)”,[ref 25] defined in the previous Green Book as the “likelihood that there will be some event so devastating that all returns from policies, programmes or projects are eliminated”.[ref 26] This approach to a declining discount schedule demonstrates the sort of risk aversion, attentive to catastrophic and existential risk, that is necessary when regulations present significant uncertainty.
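For readers less familiar with that structure, the social time preference rate in the standard Ramsey formulation on which the Green Book relies can be sketched as follows (the decomposition of ρ is as quoted above; the remaining symbols are the conventional ones, stated here for completeness):

$$r = \rho + \mu g, \qquad \rho = \delta + L,$$

where δ is pure time preference, L is the catastrophic-risk term described above, μ is the elasticity of the marginal utility of consumption, and g is expected per-capita consumption growth. Building L explicitly into the rate is what gives the UK approach the risk-sensitive character highlighted above: the assumed annual probability of catastrophe directly shapes how heavily long-run effects are discounted.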
6. Regulations that relate to irreversible outcomes, catastrophic risk, or existential risk warrant review as being significant under Section 3(f)(1).
In establishing thresholds for which regulations will undergo regulatory analysis, Section 3(f)(1) of Executive Order 12866 includes a number of sufficient criteria in addition to the increased monetary threshold. We note that regulations that might increase or reduce catastrophic or existential risk should be reviewed as having the potential to “adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, territorial, or tribal governments or communities.”[ref 27] Even “minor” regulations can have unintended consequences with major ramifications on our institutions, systems, and norms—those that might influence such grave risks are of particular import. For similar reasons, the Office should also review any regulation that has a reasonable chance of causing irreversible harm to future generations.[ref 28]
7. Conclusion
Circular A-4 contains important and substantial reforms to the regulation review process. The reforms, if adopted, would reduce the odds of regulations imposing undue costs on vulnerable, underrepresented, and disadvantaged communities both now and well into the future. A few additional changes would further reduce those odds—specifically, expanding the scope of analysis to include catastrophic and existential risks, including those far in the future; including future generations in distributional analysis; providing more guidance regarding model uncertainty and regulations that involve irreversible outcomes; and lowering the discount rate to zero for irreversible effects in a narrow set of cases or, minimally, lowering the discount rate in proportion to the temporal scope of a regulation.