Practical legal analysis for AI policy

The Institute for Law & AI (LawAI) is an independent think tank that researches and advises on the legal challenges posed by artificial intelligence. We believe that sound legal analysis will promote security, welfare, and the rule of law.

Artificial intelligence raises difficult questions for legal systems. Our work examines how laws, institutions, and policymakers could respond as AI systems become more capable and widely deployed. We approach these questions with rigorous analysis and a willingness to engage with both technical detail and novel legal ideas.

Our work

Through our research and consulting, LawAI is creating legal infrastructure for transformative AI across three specialized teams: 

  • US Law and Policy: Our team provides practical legal research to improve the quality of current AI policy efforts. Our real-time analysis and strategic advice have proved central to key debates in AI policy and have made us a go-to resource for policymakers, industry, and civil society. 
  • EU Law: Our current research focuses on general-purpose AI models and enforcement issues related to the EU AI Act and the General-Purpose AI Code of Practice. This work has significantly shaped regulatory implementation in the European Union. 
  • Legal Frontiers: Our team pursues foundational research, pioneering concepts like Law-Following AI and Chips for Peace.

This structure—combining teams focused on immediate policy needs with one dedicated to breakthrough research—enables us to both respond to today’s urgent challenges and build the frameworks needed for tomorrow’s AI systems.

Through our applied research, we help policymakers, regulators, think tanks, and industry think through and craft laws and policies. We work with organizations and people across political lines, and we are not affiliated with any political party.

Our team of legal experts works around the world, with two primary hubs in Washington, DC, and Cambridge, UK.

Our origins

Under the leadership of Director Christoph Winter (Assistant Professor of Law and AI at the University of Cambridge), LawAI has assembled a world-class team. This includes Cullen O’Keefe as Director of Research (Harvard Law, previously leading OpenAI’s Policy Frontiers team) and Mackenzie Arnold as Director of US Policy (Harvard Law, Third Circuit clerk).

LawAI’s roots trace back to Harvard Law School in 2019, where Winter and Eric Martínez (now at the University of Chicago)—with the help of O’Keefe and Jonas Freund (now at the Centre for the Governance of AI)—met and launched the Legal Priorities Project to identify the most impactful areas for legal research. After exploring challenges ranging from pandemic preparedness to institutional design, we became convinced that AI represented the highest-priority legal challenge of our time. This realization, combined with a clear gap in rigorous legal research on artificial intelligence, led us to focus exclusively on AI and evolve into LawAI.

Preparing the next generation of leaders in law and AI

We recognize that the scale of legal issues raised by AI far exceeds what any single organization can address. That’s why we’ve made identifying, training, and connecting exceptional legal minds a core part of our mission. The future of AI law requires experts who understand legal doctrine, technology, and policy—and we work to develop this rare combination of skills. Our Seasonal Research Fellowships attract both top-tier students and leading legal experts and scholars from academia, government, and industry, and have become one of the most prestigious entry points into AI law and policy.

Alumni of our programs have gone on to senior roles at leading institutions including the US government, the EU AI Office, the UK AISI, leading AI companies, academia, and think tanks across the world. This growing network of legal experts—connected through their time at LawAI—is shaping AI law and policy from multiple angles across research, regulation, industry, and academia.

Operating principles

LawAI’s operating principles guide how we interact with each other and with the world. The principles form the basis of the team culture we aspire to cultivate at LawAI.

  • Our mission comes first.
LawAI’s mission is to promote security, welfare, and the rule of law in the age of AI. We’re non-partisan, take our independence seriously, and select projects that advance our mission. We pursue transformative impact over incremental gains, focusing our energy on the breakthroughs that truly advance our mission. We’re eager to work with partners who can help make that happen.
  • We aim to make actionable recommendations.
    Whenever possible, our work should be actionable. We talk frequently with real-world decisionmakers to assess their needs, so we can focus on answering the questions that actually constrain progress. If it’s too early to come to an answer, we try to clarify the option space and identify key decision points. Our goal isn’t to be “interesting,” “erudite,” or vaguely “sensible”—though we sometimes will be. It’s to move the ball forward in tackling real problems.  
  • We focus on getting the details right.
    In law and policy, specifics matter: A single word can fundamentally alter how a law is interpreted and implemented. In both our research and advising, we turn vague wish lists into concrete text, confronting the difficult tradeoffs and nuances that emerge when pen meets paper. We believe that working rigorously through the precise details of complex issues makes us think more critically, clarifies key considerations, and enables progress on the areas that matter. 
  • We are curious about big ideas.
We recognize that important ideas are often unorthodox or counterintuitive, and we make strategic bets on ideas with big upsides. We create an environment of curiosity where breakthrough thinking thrives, giving team members the security to challenge conventional wisdom and propose ambitious solutions, and the license to focus on the big ideas that matter.
  • We pursue excellence with curious optimism.
    We believe that great work and genuine enjoyment are mutually reinforcing. Our collaborative spirit thrives on quick wit, shared laughter, and finding moments of levity even in challenging times. We maintain curious optimism when tackling hard problems, choosing constructive engagement over cynicism. While our mission is serious, we don’t have to be serious all the time.
  • We practice trust and kindness.
    We trust our team members and empower them to work through big ideas and collaborate across teams. We treat each other with genuine kindness and create an environment where people feel supported. We’re generous with flexibility around mental health, productivity support, and family obligations, knowing that each person has unique needs. By fostering a culture where people feel trusted and cared for, we create the conditions for exceptional work and genuine fulfillment.
  • We seek the truth with nuance and humility.
    Hard problems typically don’t have simple solutions. The development and regulation of AI in particular is characterized by considerable uncertainty and complex tradeoffs that must be acknowledged and incorporated into our work. We test our ideas against evidence and update our positions when we’re wrong. We evaluate facts candidly, even when inconvenient, and encourage openness and transparency in hopes that someone else points out our errors. We cultivate curiosity and humility to find the right answer, knowing full well that we often won’t get it right on the first try.
  • We aim to communicate with clarity.
    Reasoning transparently makes all of us better. We write clearly so decision-makers can evaluate their options and come to their own conclusions. This is especially important as we rely on building lasting trust and relationships with our partners, clients, and collaborators. Internally, we give direct, constructive feedback to sharpen our thinking and improve our work. We welcome frank, good faith disagreement in service of finding the right answer.
  • We balance urgency with patience.
    AI law and policy moves at breakneck speed—windows of opportunity open and close without warning. We’ve built the capacity to respond quickly when needed, but we also know that preparation is key. We invest in research streams before they become “hot topics” and give ourselves time to work through complex questions. We’re willing to pass on small wins and quick publicity in favor of positioning ourselves for the breakthroughs that matter. This balance allows us to be both responsive to immediate needs and strategic about achieving transformative impact.

Get involved

If the above resonates with you, there are a number of ways to join our community of researchers and practitioners.