Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI
Abstract
Advanced AI systems may be developed that exhibit capabilities presenting significant risks to public safety or security. Conversely, they may also exhibit capabilities that can be applied to defend against significant risks across a wide range of domains, including (but not limited to) building societal resilience against AI threats.
We propose Coordinated Disclosure of Dual-Use Capabilities (CDDC) as a process to guide early information-sharing about these capabilities among advanced AI developers, US government agencies, and other private sector actors. The process centers on an information clearinghouse (the “coordinator”), which receives evidence of dual-use capabilities from finders via mandatory and/or voluntary reporting pathways and passes noteworthy reports to defenders for follow-up (i.e., further analysis and response). This aims to provide the US government, dual-use foundation model (DUFM) developers, and other defense-relevant actors with a comprehensive overview of AI capabilities that could significantly impact public safety and security, as well as maximal time to respond and implement countermeasures.
We make several recommendations:
Congress should assign a coordinator within the US government to receive and distribute reports on dual-use AI capabilities (“DUC reports”), and to develop legal clarity and infrastructure to facilitate reporting from outside government. This should be paired with strengthened reporting requirements for DUCs.
Either the President via Executive Order or Congress via legislation should assign agency leads for working groups of “defender” agencies—agencies that receive DUC reports from the coordinator and act on them.
Congress should fund the US AI Safety Institute to build capacity for wider government involvement in model evaluations (by enabling agencies to perform evaluations directly, or to audit or otherwise participate in company-run evaluations).
The National Institute of Standards and Technology (NIST), or alternatively a non-governmental organization such as Carnegie Mellon University’s Software Engineering Institute (CMU SEI) or the Frontier Model Forum (FMF), should lead efforts with AI developers, relevant agencies, and third parties to develop a common language for DUC reporting and triage.
DUFM developers should establish clear policies and intake procedures for independent researchers reporting dual-use capabilities, modeled on existing vulnerability reporting policies.
DUFM developers should create and maintain incident response plans for DUCs and build working relationships with defenders in government, other AI companies, and other relevant non-governmental organizations.
DUFM developers should collaborate with working groups (once such groups are established) to identify capabilities that could help defenders; these capabilities can then be shared via the CDDC infrastructure.
Note: This report is limited in scope to reporting in the context of US companies, US law, and the US government. Additional work is needed to adapt the considerations in this report to other jurisdictions and international contexts.