Blog Post
April 2026

Radical Optionality: Governing Transformative AI Under Uncertainty

We are pleased to announce a new essay by LawAI Director Christoph Winter and Senior Research Fellow Charlie Bullock: Radical Optionality: Governing Transformative AI Under Uncertainty.

The prospect of “transformative AI” appears to present policymakers with a dilemma: overregulation could stifle innovation and forfeit the potential benefits of the technology, while a failure to regulate appropriately could have disastrous implications for public safety and national security.

It’s true that security and innovation are sometimes in tension. Some safety measures do impose costs on innovation, and some forms of deregulation do carry genuine risks. But there is also a class of policies that would meaningfully increase safety without imposing significant costs on innovation. We argue that governments should aggressively implement these policies; this is the main thrust of the governance strategy discussed in this essay, which we call “radical optionality.”

At its core, radical optionality is about preserving democratic governments’ ability to make good decisions about how to govern transformative AI systems as circumstances evolve. In the short term, this means avoiding overregulation while rapidly building the institutions, information channels, and legal authorities needed to respond competently to a broad range of scenarios.

The argument for focusing on optionality is simple, and—if you accept a few reasonable assumptions—compelling. These assumptions are:

  1. That there is a real possibility of transformative AI (defined as “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution”) being developed within the next ten years;
  2. That profound uncertainty exists as to what capabilities transformative AI systems will possess, what benefits and risks they will generate, and what the best ways for society to capture the benefits while mitigating the risks will be;
  3. That a transformatively impactful dual-use technology with significant national security implications will inevitably require some degree of government oversight; and
  4. That building the institutional capacity required to effectively govern transformative AI systems will take years, and that society therefore cannot afford to wait until transformative capabilities have actually been developed.

Justifying the first assumption is beyond the scope of this paper. Whether “AGI” or “superintelligence” or “powerful AI” or “transformative AI” will ever arrive, and when, are questions that have been debated extensively elsewhere. But if you believe that transformative AI is possible, we hope to demonstrate that the case for radical efforts to preserve optionality is overwhelmingly strong.

Read the essay at https://radical-optionality.ai/
