The future of international scientific assessments of AI’s risks
Abstract
Effective international coordination to address AI’s global impacts demands a shared, scientifically rigorous understanding of AI risks. This paper examines the challenges and opportunities in establishing international scientific consensus in this domain. It analyzes current efforts, including the UK-led International Scientific Report on the Safety of Advanced AI and emerging UN initiatives, identifying their key limitations and tradeoffs. The authors propose a two-track approach: 1) a UN-led process addressing broad AI issues and engaging member states, and 2) an independent annual report focused specifically on the risks of advanced AI. The paper recommends careful coordination between these efforts to leverage their respective strengths while maintaining their independence. It also evaluates potential hosts for the independent report, including the network of AI Safety Institutes, the OECD, and scientific organizations such as the International Science Council. The proposed framework aims to balance scientific rigor, political legitimacy, and timeliness, enabling coordinated international action on AI risks.