Who should develop which AI evaluations?
This paper, published by the Oxford Martin AI Governance Initiative, explores how to determine which actors are best suited to develop AI model evaluations. IAPS staff Renan Araujo, Oliver Guest, and Joe O’Brien were among the co-authors.
The Future of the AI Summit Series
This is a link post for a paper led by researchers from the Oxford Martin AI Governance Initiative; IAPS researcher Oliver Guest was one of the authors.
Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence
A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of general-purpose artificial intelligence. Three areas of divergence are notable for policymakers: the focus of domestic AI regulation, key principles of domestic AI regulation, and approaches to implementing international AI governance.
Key questions for the International Network of AI Safety Institutes
In this commentary, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20-21, 2024. What should the network work on? How should it be structured in terms of membership and central coordination? How should it fit into the international governance landscape?
Chinese AI Safety Institute Counterparts
Based on a systematic review of open sources, we identify Chinese “AISI counterparts”: Chinese institutions that do work similar to that of the US and UK AISIs and that have relatively close links to the government.
Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges
AI Safety Institutes (AISIs) are a new institutional model for AI governance that has expanded across the globe. In this primer, we analyze the “first wave” of AISIs, examining the shared fundamental characteristics and functions of the institutions established by the UK, the US, and Japan: they are governmental, technical, and have a clear mandate to govern the safety of advanced AI systems.
The Future of International Scientific Assessments of AI’s Risks
This piece is a link post for a paper led by Hadrien Pouget (Carnegie Endowment for International Peace) and Claire Dennis (Centre for the Governance of AI). IAPS staff Renan Araujo and Oliver Guest were among the paper’s co-authors.
Topics for Track IIs: What Can Be Discussed in Dialogues About Advanced AI Risks Without Leaking Sensitive Information?
This issue brief suggests agenda items that could be discussed in dialogues about advanced AI risks while minimizing the risk of leaking sensitive information.
Safeguarding the Safeguards: How Best to Promote Alignment in the Public Interest
With this paper, we aim to help actors who support alignment efforts make those efforts as effective as possible and avoid potential adverse effects.
International AI Safety Dialogues: Benefits, Risks, and Best Practices
Events that bring together international stakeholders to discuss AI safety are a promising way to reduce AI risks. This report recommends ways to make these events a success.