Oscar Delaney

Mapping Technical Safety Research at AI Companies

This report analyzes the research that Anthropic, Google DeepMind, and OpenAI have published on safe AI development, as well as the corporate incentives that shape which areas they investigate. The analysis reveals where corporate attention is concentrated and where there are potential gaps.

Issue Brief Jam Kraprayoon

An Early Warning System For AI-Powered Threats To National Security And Public Safety

This policy memo by Jam Kraprayoon, Joe O’Brien, and Shaun Ee (IAPS), published by the Federation of American Scientists, proposes that Congress establish an early warning system for novel AI-enabled threats, giving defenders the maximum possible time to respond to a given capability before information about it is disclosed or leaked to the public.

Research Report Joe O'Brien

Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI

Future AI systems may be capable of enabling offensive cyber operations, lowering the barrier to entry for designing and synthesizing bioweapons, and facilitating other high-consequence dual-use applications. If and when these capabilities are discovered, who should know first, and how? We describe a process for information-sharing on dual-use capabilities and make recommendations for governments and industry on developing this process.

Link Post Asher Brass

Responsible Reporting for Frontier AI Development

Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems.
