Renan Araujo

Who should develop which AI evaluations?

This paper, published by the Oxford Martin AI Governance Initiative, explores how to determine which actors are best suited to develop AI model evaluations. IAPS staff Renan Araujo, Oliver Guest, and Joe O’Brien were among the co-authors.

Oliver Guest

The Future of the AI Summit Series

This is a link post for a paper led by researchers from the Oxford Martin AI Governance Initiative and co-authored by IAPS researcher Oliver Guest.

Oliver Guest

Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence

A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of general-purpose artificial intelligence. Three areas of divergence are notable for policymakers: the focus of domestic AI regulation, key principles of domestic AI regulation, and approaches to implementing international AI governance.

Commentary Sumaya Nur Adan

Key questions for the International Network of AI Safety Institutes

In this commentary, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20–21, 2024. What should the network work on? How should it be structured in terms of membership and central coordination? How should it fit into the international governance landscape?

Renan Araujo

Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges

AI Safety Institutes (AISIs) are a new institutional model for AI governance that has expanded across the globe. In this primer, we analyze the “first wave” of AISIs: the shared fundamental characteristics and functions of the institutions established by the UK, the US, and Japan, which are governmental, technical bodies with a clear mandate to govern the safety of advanced AI systems.

Oscar Delaney

Mapping Technical Safety Research at AI Companies

This report analyzes the research published by Anthropic, Google DeepMind, and OpenAI on safe AI development, as well as each company's incentives to pursue different research areas. The analysis reveals where corporate attention is concentrated and where potential gaps remain.

Issue Brief Jam Kraprayoon

An Early Warning System For AI-Powered Threats To National Security And Public Safety

This policy memo by Jam Kraprayoon, Joe O’Brien, and Shaun Ee (IAPS), published by the Federation of American Scientists, proposes that Congress set up an early warning system for novel AI-enabled threats, giving defenders the maximum possible time to respond to a given capability before information about it is disclosed or leaked to the public.

Research Report Joe O'Brien

Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI

Future AI systems may enable offensive cyber operations, lower the barrier to entry for designing and synthesizing bioweapons, and support other high-consequence dual-use applications. If and when these capabilities are discovered, who should know first, and how? We describe a process for information-sharing on dual-use capabilities and make recommendations for governments and industry on developing this process.
