Assuring Growth: Making the UK a Global Leader in AI Assurance Technology
This policy memo by Jam Kraprayoon and Bill Anderson-Samways, published by UK Day One and the Social Market Foundation, recommends that the UK government implement a targeted market-shaping program, mobilizing public and private sector investment to supercharge the UK's AI assurance technology industry.
Spreadsheets vs. Smugglers: Modernizing the BIS for an Era of Tech Rivalry
This blog post by Erich Grunewald (IAPS) and Samuel Hammond (the Foundation for American Innovation) argues that Congress should increase funding for the Bureau of Industry and Security (BIS).
Highlights for Responsible AI from the Biden Administration's FY2025 Budget Proposal
This issue brief analyzes key AI-related allocations in the Biden Administration's FY2025 budget proposal, assessing their potential impact on the responsible development of advanced AI.
Responsible Reporting for Frontier AI Development
Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems.
Federal Drive with Tom Temin podcast interview: Onni Aarne on AI hardware security risks
On this episode of the Federal Drive with Tom Temin, IAPS consultant Onni Aarne discusses how specialized AI chips, and the systems that use them, need protection from theft and misuse. The podcast episode and interview transcript are available on the Federal News Network.
Secure, Governable Chips
The Center for a New American Security (CNAS), in collaboration with the Institute for AI Policy and Strategy (IAPS), has released a new report, Secure, Governable Chips, by Onni Aarne, Tim Fist, and Caleb Withers.
The report introduces the concept of “on-chip governance,” detailing how security features on AI chips could help mitigate national security risks from the development of broadly capable dual-use AI systems, while protecting user privacy.
Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework
This paper discusses how external scrutiny (such as third-party auditing, red-teaming, and researcher access) can bring public accountability to bear on decisions regarding the development and deployment of frontier AI models.
Preventing AI Chip Smuggling to China
This working paper was led by Tim Fist of the Center for a New American Security (CNAS) and coauthored by IAPS researcher Erich Grunewald. It builds on IAPS's earlier report on AI chip smuggling into China.
Managing AI Risks in an Era of Rapid Progress
This paper discusses risks from future AI systems and proposes priorities for AI R&D and governance. Its many authors include an IAPS researcher, Turing Award winners, and a Nobel Memorial Prize winner.
How Expertise in AI Hardware Can Help with AI Governance
This article was written for the organization 80,000 Hours by an IAPS researcher. It discusses why building expertise in AI hardware may be valuable and how that expertise can be used to reduce risks and improve governance decisions.
Open-Sourcing Highly Capable Foundation Models
This paper, led by the Centre for the Governance of AI, evaluates the risks and benefits of open-sourcing highly capable foundation models, as well as alternative methods for pursuing open-source objectives.