Blog Post | July 2025

Christoph Winter’s remarks to the European Parliament on AI Agents and Democracy

Summary

On July 17th, LawAI’s Director and Founder, Christoph Winter, was invited to speak before the European Parliament’s Special Committee on the European Democracy Shield, with the participation of IMCO and LIBE Committee members. Professor Winter was asked to present on AI governance, regulation, and democratic safeguards. He spoke about the democratic challenges that AI agents may present and how democracies could approach them.

He made two recommendations to the Committee:

  • Introduce Law-Following AI: AI systems should be built to follow the law, architecturally constrained to refuse actions that would be illegal if performed by humans in the same position (a purely illustrative sketch follows this list). Just as AIs are currently trained to decline to help build bombs, they would reject orders to violate constitutional rights or election laws.
  • Strengthen the AI Office: The AI Office needs many more skilled people to rigorously analyze what companies submit under the Code of Practice and AI Act—to scrutinize their risk assessments, verify their mitigation measures, and spot gaps in their safety evaluations.
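
To make the first recommendation concrete, here is a minimal, purely illustrative sketch of how a law-following constraint might gate an agent’s actions. It is not part of the remarks and not a real implementation: it assumes a hypothetical legal-review step that labels each proposed action as lawful or unlawful for a human in the same position, and the agent simply refuses anything labeled unlawful. An actual law-following constraint would need to be built into the model’s training and architecture rather than bolted on as a wrapper.

```python
# Illustrative sketch only: a hypothetical "law-following" gate around agent actions.
# The legality label is assumed to come from a separate legal-review step, which is
# the hard, unsolved part; nothing here reflects how any real system works.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    illegal_for_a_human: bool  # hypothetical output of a legal-review step


def law_following_gate(action: ProposedAction) -> str:
    """Refuse any action that would be illegal if a human performed it."""
    if action.illegal_for_a_human:
        return f"REFUSED: {action.description!r} would be illegal for a human in this position."
    return f"PROCEEDING: {action.description!r}"


if __name__ == "__main__":
    print(law_following_gate(ProposedAction("draft a get-out-the-vote email", illegal_for_a_human=False)))
    print(law_following_gate(ProposedAction("suppress a journalist's political speech", illegal_for_a_human=True)))
```

The point of the sketch is only the decision structure: legality, rather than the instructing user’s wishes, is the binding constraint on what the agent will do.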

Transcript

Distinguished Members of Parliament, fellow speakers and experts,

Manipulating public opinion at scale used to require vast resources. That is changing quickly. During Slovakia’s 2023 election, a simple deepfake audio recording of a candidate discussing vote-buying schemes circulated just 48 hours before polls opened: too late for fact-checking, but not too late to reach thousands of voters. And deepfakes are really just the beginning.

AI agents, autonomous systems that can act on the internet like skilled human workers, are being developed by all major AI companies. Soon, they could simultaneously orchestrate large-scale manipulation campaigns, hack electoral systems, and coordinate cyber-attacks on fact-checkers—all while operating 24/7 at unprecedented scale.

Today, I want to propose two solutions to these democratic challenges. First, requiring AI agents to be Law-following by design. And second, strengthening the AI Office’s capacity to understand and address AI risks. Let me explain each.

Law-following AI requires AI systems to be architecturally constrained to refuse actions that would be illegal if performed by humans in the same position. Just as AIs are currently trained to decline to help build bombs, they would reject orders to violate constitutional rights or election laws.

Law-following AI is democratically compelling for three reasons: First, it is democratically legitimate. Laws represent our collective will, refined through democratic deliberation, rather than unilaterally determined corporate values. Second, it enables democratic adaptability. Laws can be changed through democratic processes, and AI agents designed to follow law can automatically adjust their behavior. Third, it offers a democratic shield—because without these constraints, we risk creating AI agents that blindly follow orders, and history has shown where blind obedience leads.

In practice, this would mean that AI agents bound by law would refuse orders to suppress political speech, manipulate elections, blackmail officials, or harass dissidents. This way, law-following AI could prevent authoritarian actors from using obedient AI agents to entrench their power. Of course, it can’t prevent all forms of manipulation—much harmful persuasion operates within legal bounds. But blocking AI agents from illegal attacks on democracy is a critical first step.

The EU’s Code of Practice on General-Purpose AI already recognizes this danger and identifies “lawlessness” as a model propensity that contributes to systemic risk. But just as we currently lack reliable methods to assess how persuasive AI systems are, we currently lack a way to reliably measure AI lawlessness.

And perhaps most concerningly—and this brings me to my second proposal—the AI Office currently lacks the institutional capacity to develop these crucial capabilities.

The AI Office needs sufficient technical, policy, and legal staff to rigorously analyze what companies submit under the Code of Practice and AI Act—to scrutinize their risk assessments, verify their mitigation measures, and spot gaps in their safety evaluations. In other words: When a company claims their AI agent is law-following, the AI Office must have the expertise and resources to independently test that claim. When developers report on persuasion capabilities—capabilities that even they may not fully understand—the AI Office needs experts who can identify what’s missing from those reports.

Rigorous evaluation isn’t just about compliance—it’s about how we learn: each assessment and each gap we identify builds our understanding of these systems. This is why adequate AI Office capacity matters: not just for evaluating persuasion capabilities or Law-following AI today, but for understanding and preparing for risks to democracy that grow with each model release.

To illustrate what the current resource gap looks like: Recent reports suggest Meta offered one AI researcher a salary package of €190 million. The AI Office—tasked with overseeing the entire industry—operates on less.

This gap between private power and public capacity is unsustainable for our democracy. If we’re serious about democracy, we must fund our institutions accordingly.

So to protect democracy, we can start with two things: AI agents bound by human laws, and an AI Office with the capacity to understand and evaluate the risks.

Thank you.

The full video can be watched here (starts 12:01:02).
