Technology to Secure the AI Chip Supply Chain: A Working Paper

This is a linkpost to a piece that Tao Burga, an IAPS fellow, co-authored with researchers from CNAS (Center for a New American Security).

Advanced artificial intelligence (AI) systems, built and deployed with specialized chips, show vast potential to drive economic growth and scientific progress. As this potential has grown, so has debate among U.S. policymakers about how best to limit emerging risks. In some cases, this concern has driven significant policy shifts, most notably through sweeping export controls on AI chips and semiconductor manufacturing equipment sold to China.

However, AI-focused chip export controls are challenging to target well. Since chip exporters and officials at the U.S. Department of Commerce currently have no reliable means of determining who possesses AI chips after they have been exported, today's controls are applied in a blanket fashion, without regard to end use or end user. Furthermore, because AI chips and AI algorithms improve over time, the quantity and quality of AI hardware required to develop a model with a particular set of dangerous capabilities will steadily decrease. This means that to fulfill their goal of limiting access to specific capabilities, AI export controls must steadily grow in scope, becoming ever more burdensome on exporters and end users.

Today's controls are also difficult to enforce using the current process. Enforcement relies on exporters checking buyers against an official roster of blacklisted organizations maintained by the Bureau of Industry and Security within the U.S. Department of Commerce. Evading this process is straightforward: a shell company can typically be set up online for a few thousand dollars in a matter of hours or days, whereas it can take years of investigation to uncover a shell company's illicit activities and add it to the list.

At the same time, in the absence of export controls, ensuring that advanced AI technologies are not used for malicious purposes by state and nonstate adversaries could require an intrusive surveillance regime with deleterious consequences for U.S. economic competitiveness and the preservation of democratic values. As policymakers consider how to balance security, competitiveness, and a commitment to democratic values, there is growing interest in technological solutions that can strike a better trade-off between these objectives and keep pace with fast AI progress and the rapidly evolving security landscape. Hardware-enabled mechanisms (HEMs)—mechanisms built into data center AI hardware to serve specific security and governance objectives—have especially attracted interest as a promising new tool.
