Response to BIS RFC on Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters
This post is a copy of IAPS’s response to a BIS request for public comment. It outlines ways to expand the role of other stakeholders in the reporting process for AI models and compute clusters, including third-party evaluators, civil society groups, and other public sector entities.
Response to the DOD RFI on Defense Industrial Base Adoption of Artificial Intelligence for Defense Applications
IAPS submitted a response to a Department of Defense Request for Information on Defense Industrial Base Adoption of AI for Defense Applications.
Mapping Technical Safety Research at AI Companies
This report analyzes the research published by Anthropic, Google DeepMind, and OpenAI on safe AI development, as well as each company’s incentives to pursue different research areas. The analysis reveals where corporate attention is concentrated and where potential gaps remain.
Assuring Growth: Making the UK a Global Leader in AI Assurance Technology
This policy memo by Jam Kraprayoon and Bill Anderson-Samways, published by UKDayOne and the Social Market Foundation, recommends that the UK government implement a targeted market-shaping program that mobilizes public and private sector investment to supercharge the UK’s AI assurance technology industry.
An Early Warning System for AI-Powered Threats to National Security and Public Safety
This policy memo by Jam Kraprayoon, Joe O’Brien, and Shaun Ee (IAPS), published by the Federation of American Scientists, proposes that Congress set up an early warning system for novel AI-enabled threats, giving defenders as much time as possible to respond to a given capability before information about it is disclosed or leaked to the public.
Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI
Future AI systems may be capable of enabling offensive cyber operations, lowering the barrier to entry for designing and synthesizing bioweapons, and supporting other high-consequence dual-use applications. If and when these capabilities are discovered, who should know first, and how? We describe a process for information-sharing on dual-use capabilities and make recommendations for governments and industry to develop this process.
Highlights for Responsible AI from the Biden Administration's FY2025 Budget Proposal
This issue brief analyzes key AI-related allocations in the Biden Administration’s FY2025 Presidential Budget, assessing their potential impact on the responsible development of advanced AI.
Responsible Reporting for Frontier AI Development
Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into the new and emerging risks these systems pose.
AI-Relevant Regulatory Precedents: A Systematic Search Across All Federal Agencies
A systematic search for potential case studies relevant to advanced AI regulation in the United States, examining all federal agencies on factors such as level of expertise, use of risk assessment, and analysis of uncertain phenomena.
Responsible Scaling: Comparing Government Guidance and Company Policy
This issue brief evaluates Anthropic’s Responsible Scaling Policy (RSP), the original example of such a policy, against guidance on responsible capability scaling from the UK Department for Science, Innovation and Technology (DSIT).
Response to the NIST RFI on Auditing, Evaluating, and Red-Teaming AI Systems
IAPS’s response to a NIST RFI, outlining specific guidelines and practices that could help AI actors better manage and mitigate risks from AI systems, particularly from dual-use foundation models.
Catching Bugs: The Federal Select Agent Program and Lessons for AI Regulation
This paper examines the Federal Select Agent Program, the linchpin of US biosecurity regulations. It then draws out lessons for AI regulation regarding licensing, regulatory expertise, and the merits of “risk-based” vs. “list-based” systems.
Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework
This paper discusses how external scrutiny (such as third-party auditing, red-teaming, and researcher access) can bring public accountability to bear on decisions regarding the development and deployment of frontier AI models.
Adapting Cybersecurity Frameworks to Manage Frontier AI Risks: a Defense-in-Depth Approach
The complex and evolving threat landscape of frontier AI development requires a multi-layered approach to risk management (“defense-in-depth”). Drawing on a review of cybersecurity and AI frameworks, we outline three approaches that can help identify gaps in the management of AI-related risks.
Open-Sourcing Highly Capable Foundation Models
This paper, led by the Centre for the Governance of AI, evaluates the risks and benefits of open-sourcing, as well as alternative methods for pursuing open-source objectives.
Deployment Corrections: An Incident Response Framework for Frontier AI Models
This report describes a toolkit that frontier AI developers can use to respond to risks discovered after deployment of a model. We also provide a framework for AI developers to prepare and implement this toolkit.