AI Companies’ Safety Research Leaves Important Gaps. Governments and Philanthropists Should Fill Them.
This is a linkpost for an article written by IAPS researchers Oscar Delaney and Oliver Guest.
AI safety needs Southeast Asia’s expertise and engagement
This is a linkpost for a Brookings Institution article written by IAPS researchers Shaun Ee and Jam Kraprayoon.
Technology to Secure the AI Chip Supply Chain: A Working Paper
This is a linkpost for a working paper that Tao Burga, an IAPS fellow, co-authored with researchers from the Center for a New American Security (CNAS).
Who should develop which AI evaluations?
This paper, published by the Oxford Martin AI Governance Initiative, explores how to determine which actors are best suited to develop AI model evaluations. IAPS staff Renan Araujo, Oliver Guest, and Joe O’Brien were among the co-authors.
The Future of the AI Summit Series
This is a linkpost for a paper led by researchers from the Oxford Martin AI Governance Initiative; IAPS researcher Oliver Guest was among the authors.
Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence
A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of general-purpose artificial intelligence. Three areas of divergence are notable for policymakers: the focus of domestic AI regulation, key principles of domestic AI regulation, and approaches to implementing international AI governance.
Key questions for the International Network of AI Safety Institutes
In this commentary, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward ahead of the San Francisco convening on November 20–21, 2024. What should the network work on? How should it be structured in terms of membership and central coordination? How should it fit into the international governance landscape?
Chinese AI Safety Institute Counterparts
Based on a systematic review of open sources, we identify Chinese “AISI counterparts,” i.e., Chinese institutions that do work similar to that of the US and UK AISIs and that have relatively close government links.
Response to BIS RFC on Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters
This post is a copy of IAPS’ response to a Bureau of Industry and Security (BIS) request for public comment. It outlines ways to expand the role of other stakeholders, including third-party evaluators, civil society groups, and other public sector entities, in the reporting process for AI models and computing clusters.
Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges
AI Safety Institutes (AISIs) are a new institutional model for AI governance that has spread across the globe. In this primer, we analyze the “first wave” of AISIs, established by the UK, the US, and Japan, and their shared fundamental characteristics and functions: these institutions are governmental and technical, with a clear mandate to govern the safety of advanced AI systems.
Response to the RFC on U.S. Artificial Intelligence Safety Institute's AI-800-1 Draft Document
IAPS submitted a response to a National Institute of Standards and Technology (NIST) Request for Comment, outlining practices that could help AI developers better manage and mitigate misuse risks from dual-use foundation models.
Response to the DOD RFI on Defense Industrial Base Adoption of Artificial Intelligence for Defense Applications
IAPS submitted a response to a Department of Defense Request for Information on Defense Industrial Base Adoption of AI for Defense Applications.
Mapping Technical Safety Research at AI Companies
This report analyzes research on safe AI development published by Anthropic, Google DeepMind, and OpenAI, as well as these companies’ incentives to research different areas. The analysis reveals where corporate attention is concentrated and where there are potential gaps.
The Future of International Scientific Assessments of AI’s Risks
This is a linkpost for a paper led by Hadrien Pouget (Carnegie Endowment for International Peace) and Claire Dennis (Centre for the Governance of AI). IAPS staff Renan Araujo and Oliver Guest were among the paper’s co-authors.
Assuring Growth: Making the UK a Global Leader in AI Assurance Technology
This policy memo by Jam Kraprayoon and Bill Anderson-Samways, published by UK Day One and the Social Market Foundation, recommends that the UK government implement a targeted market-shaping program, mobilizing public- and private-sector investment to supercharge the UK’s AI assurance technology industry.
An Early Warning System for AI-Powered Threats to National Security and Public Safety
This policy memo by Jam Kraprayoon, Joe O’Brien, and Shaun Ee (IAPS), published by the Federation of American Scientists, proposes that Congress set up an early warning system for novel AI-enabled threats, giving defenders maximal time to respond to a given capability before information about it is disclosed or leaked to the public.
Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI
Future AI systems may be capable of enabling offensive cyber operations, lowering the barrier to entry for designing and synthesizing bioweapons, and other high-consequence dual-use applications. If and when these capabilities are discovered, who should know first, and how? We describe a process for information-sharing on dual-use capabilities and make recommendations for governments and industry to develop this process.