Key questions for the International Network of AI Safety Institutes

Authors: Sumaya Nur Adan, Oliver Guest, Renan Araujo

Executive Summary

In this commentary piece, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20-21, 2024.

What kinds of work should the Network prioritize?

The Network should prioritize topics that are urgent and important for AI safety, align well with AISIs’ competencies, and are elevated by collaboration, so that the Network achieves more than the sum of its individual members. Examples include:

  • Standards: Members should work towards consensus on some safety-relevant practices. We particularly suggest safety frameworks; many AI companies committed at the Seoul AI Summit to publish such frameworks, but there is limited consensus so far on what constitutes one.

  • Information sharing: Members should identify kinds of information that should be shared between them and develop mechanisms to do so.

  • Evaluations: Members should continue working on evaluations, share best practices for safety evaluations with each other, and collaborate to improve them.

What should the structure of the Network be?

We suggest a tiered membership structure:

  • Core members: Countries with established AI Safety Institutes or equivalent national bodies. They would have full decision-making powers and contribute to the functioning of the Network.

  • Associate members: Countries with a nascent focus on AI safety. They would have access to select shared resources and contribute via working groups.

  • Observer members: Other countries, international organizations, academic institutions, and companies with relevant expertise. They would participate in project-specific working groups and provide expert input.


A secretariat would provide central functions. It would involve different positions:

  • Permanent positions: A small permanent component comprising representatives from countries with the most institutionally developed AI Safety Institutes. (This would probably be the UK and US AISIs given the current landscape.)

  • Rotating positions: Additional rotating positions filled by other core members on a temporary basis (for example, if the US and the UK formed the permanent component, they could be joined by 1-3 rotating members.)

  • Additional input from associate and observer members: Input mechanisms for the Network's various membership tiers and working groups to maintain the collaborative nature of the Network rather than creating a consolidated decision-making body.


Challenges the Network may face with its membership include:

  • Inclusion of China: The United States and allies might be reluctant to include the Chinese government, despite China’s relevance for frontier AI development and apparent willingness to engage in international AI governance efforts. One way to balance these considerations could be to include specific Chinese actors as associate or observer members. Participating Chinese actors could include prominent individuals, university-based centers, and organizations conducting technical research on AI safety.

  • Inclusion of AI companies: AI companies are highly relevant for frontier AI development, but their inclusion might cause real or perceived industry capture. One way forward could be granting AI companies only observer membership, limiting their role to engagement with specific working groups.

How should the Network promote cooperation?

  • Working groups: Working groups could be either standing or project-specific. Standing groups could continue indefinitely and focus on the priority topics of the Network, such as those covered by the working groups of the US AISI’s consortium. Project-specific groups could have a fixed duration and provide a space for experimenting with the inclusion of new associate and observer members.

  • Joint funding mechanisms: These could help the Network establish a secretariat, strengthen the continuity of its efforts, and fund the participation of members that lack resources in working groups, boosting inclusivity.

  • Collaboration with other international AI governance forums: Various efforts are already underway, ranging from the Bletchley Summit successors to UN-led processes. The Network could provide technical expertise and speedier processes to these forums, while benefiting from the legitimacy and inclusivity they provide. Examples could include scientific consensus-building efforts, such as the UN Independent Scientific Panel and the International Scientific Report on the Safety of Advanced AI.


Table of Contents

  • Executive Summary

    • What kinds of work should the Network prioritize?

    • What should the structure of the Network be?

    • How should the Network promote cooperation?

  • Introduction

  • 1 What kinds of work should the Network prioritize?

    • 1.1 Standards

    • 1.2 Information sharing

    • 1.3 Evaluations

  • 2 What should the structure of the Network be?

    • 2.1 A tiered structure for the international network

    • 2.2 Possible inclusion of China

    • 2.3 Possible inclusion of AI companies

    • 2.4 Secretariat

  • 3 How should the Network promote cooperation?

    • 3.1 Working groups

    • 3.2 Joint funding mechanisms

    • 3.3 Work together with other international AI governance forums

  • Conclusion

Introduction

At the Seoul AI Summit in May 2024, various countries agreed to create an “International Network of AI Safety Institutes”. The Network is intended to bring together AI Safety Institutes (AISIs) and similar institutions, with the first meeting scheduled for November 2024 in San Francisco.

Confirmed attendees are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. (This list closely overlaps with the Seoul signatories, though Kenya has been added, and Germany and Italy are only participating via the inclusion of the EU as a whole.)

Many fundamental aspects of the Network seem yet to be decided. In this commentary, we describe three key questions that the Network’s members will need to consider at the San Francisco meeting and beyond, and suggest some ways forward:

  1. What kinds of work should the Network prioritize?

  2. What should the institutional structure of the Network be?

  3. How can the Network best promote cooperation on AI safety?

1 What kinds of work should the Network prioritize?

There are many topics where AISIs could collaborate via the Network; they will need to prioritize. Indeed, prioritization itself is a stated priority of the San Francisco meeting. Here are some prioritization criteria that could be used by the Network:

Table 1. Criteria for work the Network of AISIs should prioritize

Criterion | Description | Example
Urgency and potential impact | How important is the topic to AI safety, particularly in the near term? Considering that AISIs are governmental bodies, how important is the topic for government action on AI safety? | Developing solid safety evaluations could unlock technical standards and give companies greater confidence that AI can be developed both quickly and safely.
Alignment with AISIs’ competencies | AISIs are far from the only parts of government that focus on AI, but they are unique in being particularly technical and safety-focused. Collaboration specifically between AISIs should focus on AI challenges where AISIs in particular can make a contribution. | Standards are a technical field where AISIs could help make progress faster than traditional standards development organizations, while ensuring high quality.
Elevated by collaboration across AISIs | Is this a topic where multiple AISIs working together would create a result that is more than the sum of its parts? | Various types of information sharing, such as institutional learnings, risk thresholds, and incident definitions, could catalyze AISIs’ work and improve international coordination on AI safety.

At Seoul, members agreed on several areas in which they could collaborate. We highlight three that meet the above criteria particularly well: standards, information sharing, and evaluations.

1.1 Standards

Although the Network is not a standard-setting body, it could be a valuable forum for creating consensus about international AI safety practices. Increasing consensus would disseminate these best practices and contribute to more formal standard-setting processes.

A particularly promising opportunity for greater consensus is safety frameworks. Various AI companies made “Frontier AI Safety Commitments” at Seoul, including publishing safety frameworks focusing on severe risks. These frameworks would articulate what kinds of risk-reduction measures the company would implement in different scenarios, such as with different levels of AI capabilities. (The participating companies agreed to publish their frameworks by the French AI Action Summit, scheduled for February 2025. To our knowledge, three companies have published so far: Anthropic, Google DeepMind, and OpenAI.)

By developing technical consensus around safety frameworks, the Network could help establish clearer standards for them across jurisdictions. AISIs' technical expertise would be particularly valuable for empirically assessing different safety measures: for instance, evaluating their effectiveness at reducing specific risks, defining comprehensive safety processes, and developing methods to verify that commitments are being implemented. This technical groundwork could help create stronger expectations for robust safety policies, making it easier to distinguish substantive frameworks from superficial ones. If safety frameworks eventually inform policy decisions, consensus on what constitutes a good safety framework could also reduce regulatory divergence between jurisdictions.

(A further question about safety frameworks is what level of risk is acceptable from AI development; this is a more normative than empirical decision, so it is best left outside the scope of the Network.)

Other promising areas for standards have been mapped by NIST in their Plan for Global Engagement on AI Standards. Among their "urgently needed and ready for standardization" topics, NIST includes:

  • Terminology and taxonomy of AI concepts

  • Measurement methods and metrics

  • Mechanisms for enhancing awareness and transparency about the origins of digital content

  • Risk-based management of AI systems

  • Security and privacy

  • Transparency among AI actors about system and data characteristics

  • Training data practices

  • Incident response and recovery plans

1.2 Information sharing

One stated goal of the Network is to promote information sharing between AISIs. This raises at least two questions: what information should be shared, and how?

There are difficult trade-offs associated with information sharing. On the one hand, it brings benefits, such as promoting cooperation and building trust. On the other hand, individual AISIs might be reluctant to share information that others could use to build more powerful AI systems, particularly if that information could be used by countries that see each other as rivals or could put the AI companies in that AISI’s country at a competitive disadvantage. Examples of such information that AISIs might prefer to withhold could include details of the architecture of systems that the AISI is evaluating, or techniques to better elicit capabilities from models as part of evaluations.

At the same time, there are types of information that involve very low risk if shared.

Table 2. Examples of shareable information across AISIs

Selected examples, adapted from Thurnherr (forthcoming).

Type of information | Examples
Institutional learnings and strategy | How did our country set up our AISI? What are we planning to achieve?
Evaluation standards | Which models should be tested? What should they be evaluated for?
Evaluation infrastructure | Software to conduct evaluations
AI incident definitions | What constitutes an AI incident worthy of attention?

It would likely also be beneficial for AISIs to share some results from their safety evaluation efforts with each other. This would help AISIs to have a well-informed sense of the risk landscape, while minimizing duplication of effort. It would also allow AISIs to be well-informed even if only some of them have access to models for testing. However, there are tradeoffs:

  • AI companies might be less willing to make voluntary agreements with an AISI to facilitate testing, if they know that testing results will be more widely shared.

  • Some evaluation results will have implications for national security. For example, evaluations might assess AI models’ ability to assist with cyberattacks. Governments might be reluctant to share information that touches on national security with some or all other countries in the Network.

AISIs could share much of the information that we describe through informal mechanisms and bilateral agreements. However, the Network should also explore setting up more formalized information channels. The San Francisco meeting would be an obvious place to agree on mechanisms for sharing information ‘horizontally’ between AISIs. In the future, a more institutionalized Network could have a body that acts as an information clearing house. One example of a formalized information-sharing mechanism would be secure information-sharing protocols, analogous to the role the Financial Action Task Force (FATF) plays in facilitating secure information exchange between financial intelligence units. Such protocols would be particularly valuable for sharing technical details about AI systems while managing commercial and security concerns, and could help avoid duplication of work on evaluations. Making progress towards the technical operationalization of such protocols would be a relevant outcome for the Network's meeting in November.
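To make the idea of formalized channels slightly more concrete, the sketch below shows one way a shared evaluation summary could carry an agreed sensitivity tier so that records can be filtered before distribution. It is purely illustrative: the field names, tier labels, and filtering rule are our own assumptions rather than anything the Network has agreed or proposed.

    from dataclasses import dataclass, field
    from enum import Enum

    class Sensitivity(Enum):
        # Hypothetical tiers, loosely mirroring the membership structure discussed below.
        PUBLIC = 1        # could go to observer members
        NETWORK = 2       # core and associate members only
        RESTRICTED = 3    # bilateral sharing only

    @dataclass
    class EvaluationSummary:
        # Illustrative fields; real records would be negotiated by members.
        model_identifier: str
        evaluating_aisi: str
        risk_domain: str              # e.g. "cyber", "biological"
        headline_finding: str
        sensitivity: Sensitivity
        caveats: list[str] = field(default_factory=list)

    def releasable_to(records: list[EvaluationSummary],
                      clearance: Sensitivity) -> list[EvaluationSummary]:
        # Keep only the records a recipient with the given clearance may receive;
        # RESTRICTED clearance sees everything, PUBLIC clearance sees the least.
        return [r for r in records if r.sensitivity.value <= clearance.value]

In practice, the hard questions are which tiers to define and who may hold the most sensitive material, not the data structure itself; the point is simply that agreeing on a common record format and sensitivity labels is a tractable technical task.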

1.3 Evaluations

AI safety evaluations (hereafter “evaluations”) are empirical assessments that test AI systems to understand their behavior and capabilities with respect to relevant risks, such as cyber, chemical, and biological misuse. These assessments have been at the core of many AISIs’ work. The Network presents an opportunity to build on this foundation through enhanced collaboration.

Evaluations remain a crucial topic for AISIs. Evaluations are a core component of existing AI risk management procedures, such as frontier safety commitments. AISIs are also uniquely positioned to conduct some kinds of evaluations. For example, AISIs’ combination of technical expertise and connection to government makes them well-suited to carrying out evaluations that involve national security considerations. At the same time, evaluations are far from “solved”. For example, current methods are insufficient for assessing whether AI systems could act autonomously in unexpected ways, and there is still uncertainty about how well evaluation results generalize from test settings to the real world.

Collaboration through the Network could strengthen evaluation work in several ways. First, through improved technical collaboration, AISIs may be able to make progress more quickly on methodological challenges, particularly for emerging capabilities that require novel evaluation approaches. Second, collaboration on evaluation procedures would increase interoperability, making it easier for individual jurisdictions to perform evaluations, and for AI companies to work with evaluators in different countries. Third, it could help establish shared expectations across jurisdictions about what constitutes adequate safety evaluation practices and appropriate safety thresholds. This would make it easier for countries to coordinate their oversight of AI systems, and for companies to comply with requirements from different governments.
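As a rough illustration of what interoperability could mean in practice, the sketch below shows a shared task specification and a common interface that each AISI’s own evaluation tooling could implement, so that results from different institutes remain comparable. The names and fields are hypothetical and are not an existing or proposed Network standard.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class TaskSpec:
        # An agreed description of an evaluation task that any AISI could run.
        task_id: str
        risk_domain: str      # e.g. "cyber", "chemical", "biological"
        scoring_rule: str     # the metric members agree on, e.g. "pass_rate"

    @dataclass
    class EvalResult:
        task_id: str
        model_identifier: str
        score: float
        evaluating_aisi: str

    class EvaluationProcedure(ABC):
        # Each institute keeps its own implementation; only the interface and
        # the result format would be shared across the Network.
        @abstractmethod
        def run(self, spec: TaskSpec, model_identifier: str) -> EvalResult:
            ...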

The Network should consider how different types of evaluation work may require different levels of collaboration. For instance, methodologies and best practices for evaluating national security risks might be best shared only among AISIs with existing intelligence-sharing relationships and appropriate security infrastructure, as these evaluations often involve classified information and sensitive capabilities that could be misused if they became more widely known. Other aspects of safety evaluation development could involve broader collaboration across the full Network.

2 What should the structure of the Network be?

The press release about the San Francisco meeting hints at a desire to expand the Network’s membership, referring to the ten attendees (Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States) as the “initial” members. There are indeed good reasons to broaden Network membership:

  • Unsafe AI systems may pose risks to which no country is immune, such as autonomous cyberattacks and CBRN threats, suggesting that there are benefits to global coordination. (That said, it might be a wasteful duplication of resources for every country to create an AISI in order to join the Network. As others have suggested, one way to balance these considerations might be for many countries to be represented via regional-level AISIs.)

  • Current members can strengthen global AI safety by expanding Network participation. A more inclusive Network would help establish consistent safety standards across jurisdictions and reduce the likelihood of regulatory fragmentation. This is particularly important given that several countries with AI development close to the frontier—most notably China, but also others, such as the UAE—are not currently members. However, expansion should be carefully managed to ensure new members contribute meaningfully to the Network's mission.

  • Broader participation could increase both resources and legitimacy. While new members may have less technical AI safety expertise on average than current ones, their participation could still provide valuable additional funding, personnel, and facilities for safety work. Including diverse viewpoints from both state and non-state actors could also help identify potential safety issues that might be overlooked from a narrower vantage point. Moreover, broader participation would enhance the Network's legitimacy and credibility as an international body working to address global AI safety challenges.

However, expanding membership could also have challenges. These include:

  • Tradeoffs between including more countries and being able to work on cutting-edge AI safety topics. For example, additional countries might have less AI safety expertise or capacity, or weaker cybersecurity standards.

  • Reluctance from existing members to engage with countries they perceive as rivals, even if these countries are important players for AI development. China is the clearest case where this could be an issue.

  • Difficulties around including non-state actors. For example, AISIs are already sometimes criticized for being too close to AI companies.


A promising way to manage these tradeoffs is tiered membership, under which various stakeholders can be part of the Network but to different degrees. We describe below one vision of what tiered membership could look like. We then discuss in more detail two potentially challenging relationships: between the Network and China, and between the Network and AI companies.

2.1 A tiered structure for the international network

International forums often use tiered membership structures to balance including more stakeholders with maintaining the ability to get things done. Examples of institutions with tiered membership structures abound, ranging from sector-specific bodies such as the Financial Action Task Force (FATF) and the Global Partnership on AI (GPAI) to broader organizations such as the International Organization for Standardization (ISO) and the World Trade Organization. Here, we sketch out one vision of what tiered membership could look like for the International Network:

Table 3. Tiered membership structure for the Network of AISIs

Membership Tier | Inclusion criteria | Rights | Responsibilities
Core Member | Countries with established AI Safety Institutes or equivalent national bodies. | Full voting rights on network decisions, leadership of working groups, access to all shared resources. | Active participation in all network activities, contributions to the functioning of the Network, leading working groups.
Associate Member | Countries that have some focus on AI safety but whose government efforts on the topic are still nascent. | Participation in working groups, access to select shared resources, observer status in decision-making processes. | Active contribution to working groups, sharing of information relevant for shared resources.
Observer Member | Other countries, international organizations, academic institutions, and companies with relevant expertise. | Participation in specific working groups or projects, access to selected resources. | Providing expert input, participation in project-specific working groups, supporting public engagement initiatives.

In more detail: 

Core members: This tier is for countries that have demonstrated a strong commitment to AI safety, such as by forming a well-resourced AISI or equivalent body. Core members have voting rights in decision-making processes within the Network. They can lead working groups (a concept that we discuss in more detail below) and have full access to any resources that are shared across the Network; for that reason, they should also have strong cybersecurity measures in place. Core members are expected to contribute to the practical functioning of the Network, such as by hosting meetings.

Associate members: These are countries that have some focus on AI safety but whose government efforts on the topic are still nascent. Associate members can participate in working groups and access some resources. They can also participate in some decision-making processes of the Network but without voting rights. Associate membership might also be a promising way to include countries that do have significant AI safety efforts but with which core members are reluctant to work closely for other reasons, such as weaker cybersecurity standards or geopolitical tensions; we discuss this in the context of China below.

Observer membership: This tier is open to other actors that are highly relevant to AI safety. Examples might include representatives of governments without active AI safety efforts, international AI governance forums, AI companies, civil society organizations, or perhaps even particularly relevant individuals. Observer members can participate in working groups and provide expert input as needed but have no formal involvement in decision-making. Numerous actors could in principle be included as observer members; core members will need to set admission criteria so that the Network does not suffer scope creep or become unwieldy from admitting too many observers.

As the Network currently has a low degree of institutionalization, members can be flexible when defining its structure. The Network does not yet have any formally articulated procedures for determining eligibility or making decisions; in fact, it is better conceptualized as a gathering of various entities rather than as an entity in its own right. By formalizing its structure further, the Network could strengthen the sense of responsibility that members feel towards improving it, and accomplish more than the sum of its parts.

However, formalizing such structures also comes with challenges. One is how to approach the formal inclusion or exclusion of potential members: some countries, companies, or other stakeholders might feel that their newly formalized relationship does not match their expectations. Below, we address two potentially challenging cases: the inclusion of China and of AI companies.

2.2 Possible inclusion of China

The Network's expansion raises the question of whether it should include countries that are seen as rivals of existing members. This question is particularly pertinent for China given that it is an important player in frontier AI development and has shown a willingness to engage internationally on AI safety. Although there is no official national-level Chinese AISI, several Chinese institutions do AI safety work similar to AISIs elsewhere. However, key existing members of the Network—in particular, the United States—have raised concerns about the potential misuse of AI by China. 

The tiered structure of the Network could help existing members find a balanced approach: cooperating on shared interests, such as AI safety, by focusing on specific kinds of cooperation, while avoiding the complete inclusion or exclusion of China. Some possibilities:

China as an associate member: Including China but not as a core member could facilitate discussion and cooperation where there is mutual interest. At the same time, it would not commit core members to engage with China on topics where they have diverging views or interests, such as human rights topics. The UK’s Bletchley Park AI Safety Summit took a similar approach, with Chinese government representatives invited to some but not all meetings.

Chinese institutions or individuals as observer members: If it is undesirable to include the Chinese government as a whole, members could also consider including specific Chinese institutions as observers. Commerce Secretary Raimondo has hinted at taking this approach for the San Francisco meeting by inviting Chinese scientists. Recent IAPS research identifies specific institutions in China that might be suitable for collaborations with foreign AISIs. Possible counterparts include:

  • Prominent Chinese individuals working on AI safety: for example, there are four Chinese authors in the "Managing extreme AI risks amid rapid progress" consensus paper. Andrew Yao and Ya-Qin Zhang come from technical backgrounds, while Xue Lan and Qiqi Gao come from policy backgrounds.

  • University-based centers that are highly relevant to AI safety: for example, the Institute for AI International Governance (I-AIIG) at Tsinghua University. I-AIIG is a research institute focusing on policy questions in (international) AI governance. The institute’s leadership has repeatedly expressed concern about extreme AI risks. I-AIIG’s activities to promote international cooperation on AI safety and governance have included organizing the International Forum on AI Cooperation and Governance for Chinese and non-Chinese experts (the most recent Forum included a sub-event focusing on the safety of advanced AI) and participating in various track II diplomacy events relating to AI.

  • Organizations conducting technical research on AI safety, such as:

    • The Shanghai AI Lab is a government-backed AI research institution aimed at supporting the Chinese AI industry and contributing technical AI breakthroughs. Although safety is not its primary focus, it has carried out several high-quality safety projects. For example, it developed OpenCompass, a widely used AI evaluations platform. It also developed SALAD-Bench, a safety benchmark that includes risks such as AI assisting users with biological, chemical, and cyber weapons.

    • The China Academy of Information and Communications Technology (CAICT) is a think tank housed within the Ministry of Industry and Information Technology. CAICT performs AI evaluations via its “Fangsheng” platform and has published research on "large [AI] model governance" discussing various possible risks from such models. Because it sits within a government ministry, CAICT is more strongly linked to the Chinese government than the other possibilities mentioned above.

2.3 Possible inclusion of AI companies

The Network will need to carefully consider its relationship with AI companies, particularly those developing frontier AI systems. National-level AISIs often have close relationships with AI companies because frontier AI development generally happens within industry. However, these relationships have raised some concerns about industry capture. It is also not obvious that close relationships should be duplicated at the level of the Network, even if they make sense at the national level.

We discuss two kinds of possible engagement between the Network and AI companies.

  • Recommended: some collaboration in working groups.

  • Not recommended: companies sharing sensitive information and pre-deployment access to models at the Network level at this time.

Providing observer membership status to AI companies would allow them to participate in working groups, providing valuable expertise on some topics. Examples of possible working group topics where a lot of expertise would be concentrated within AI companies could include developing standardized evaluation methods, creating taxonomies for safety-relevant capabilities, and defining best practices for model documentation.

However, members will need to think carefully about how to preserve their independence while working with the companies. Some ways to do so could include:

  • Inviting AI companies specifically to working groups where their technical expertise is most needed and where there is a lower risk of capture. For example, a working group about the Network’s priorities should not necessarily include companies.

  • Structured participation mechanisms to limit the influence of companies on working groups, or make this influence transparent. For example, companies could only be invited to some meetings, or be required to participate primarily via formal consultation processes.

  • Requiring AI companies to demonstrate that they take safety seriously in order to be awarded observer status. For example, companies could be required to at minimum comply with the voluntary commitments made at the White House and Seoul. This would also provide an incentive for companies to make and keep such commitments.

Some AI companies currently share sensitive information with specific AISIs, including pre-deployment access to models, detailed technical specifications, and safety evaluation results. However, we would not currently advise attempting to replicate these arrangements at the level of the Network.

There would be benefits to achieving pre-deployment access at the Network level:

  • Allowing more AISIs to conduct pre-deployment safety evaluations.

  • Streamlining companies' engagement with multiple AISIs.

  • Creating more consistent practices across jurisdictions for matters like documentation requirements, safety testing protocols, and incident reporting procedures.

However, there would also be significant challenges to Network-level sharing that currently seem to outweigh the benefits:

  • The information AI companies share through these agreements is highly sensitive, and thus unsuitable for wide distribution, potentially even to some of the Network's core members.

    • Besides proprietary concerns, detailed technical data about advanced AI systems could inadvertently lower the barriers to replication, for example by allowing others to develop high-risk capabilities.

    • The unintentional diffusion of such information could have serious national security implications, particularly if it reaches countries that existing members see as rivals, or members with weaker cybersecurity standards than the US and UK AISIs.

  • Key stakeholders likely prefer maintaining bilateral arrangements

    • AI companies may want to retain discretion over information sharing rather than depend on Network membership structure, particularly given that Network members have varying regulatory powers. For example, the EU AI Office has significant regulatory authority while other members do not.

    • US and UK AISIs, which currently have the most developed company relationships, may be reluctant to duplicate these arrangements at the Network level.

  • The Network lacks mechanisms to promote compliance

    • A key reason why AI companies have decided to voluntarily share information with the US and UK governments is likely that they are located in these jurisdictions, and so are more susceptible to pressure from these governments. The Network does not have equivalent mechanisms.

2.4 Secretariat

The Network does not currently have its own secretariat. Secretariat functions, such as organizing convenings, sharing institutional advice, and facilitating information flow, are performed by the UK and US AI Safety Institutes (and sometimes by specific government agencies within these countries, such as the US State Department). However, it would be productive to use the San Francisco convening as an opportunity to further formalize central coordination functions for the Network.

Here, we describe a modest secretariat model that primarily facilitates technical cooperation between members, similar to the Network’s current functioning. There is precedent for the secretariats of international organizations growing over time; starting with a smaller secretariat would not preclude the Network from ending up with something more expansive as it becomes more consolidated.

The secretariat could be structured initially as:

  • Permanent positions: A small permanent component comprising representatives from countries with the most institutionally developed AI Safety Institutes. (This would probably be the UK and US AISIs given the current landscape.)

  • Rotating positions: Additional rotating positions filled by other core members on a temporary basis (for example, if the US and the UK formed the permanent component, they could be joined by 1-3 rotating members.)

  • Additional input from associate and observer members: Input mechanisms for the Network's various membership tiers and working groups to maintain the collaborative nature of the Network rather than creating a consolidated decision-making body.

As more countries develop robust AI Safety Institutes, this model could evolve from its initial static structure to incorporate more rotation in the secretariat. The key is to maintain the Network's fundamental character as a collaborative platform for technical cooperation, rather than establishing the secretariat as an independent power center in the early stages of the network.

This modest model would still offer several key advantages:

  • Enhanced capacity for coordinating technical cooperation between members

  • Support for essential Network activities while avoiding over-centralization

  • A clear point of contact for engagement with other international fora

  • Flexibility to evolve as the Network's needs develop

This structure strikes a balance between providing necessary coordination functions and preserving the Network's fundamental character as a collaborative platform for technical cooperation between members.

There are other organizational structures the Network may consider. For instance, there could be an entirely centralized model in which the secretariat is housed within one AISI, such as the UK AISI, considering its resourcing, or the US AISI, given that most advanced AI companies are based there. The secretariat could also be housed within an international organization, such as the OECD or the UN, an option originally discussed by The Future Society in an upcoming brief. There could also be an entirely rotating model, as in the G7. These options could be worth exploring in the future, as the Network becomes more consolidated.

3 How should the Network promote cooperation?

The key purpose of the Network is to facilitate international cooperation on AI Safety Science. How concretely can it do so? 

We suggest two initiatives for effective cooperation, working groups and joint funding mechanisms, and explore what working together with existing international forums could look like. These could benefit AI Safety Science in several ways, such as by reducing duplicated effort and combining members’ complementary strengths.

3.1 Working groups

The Network already has ten members, and may gain more. As the number grows, cooperation becomes more challenging. Some members may prioritize specific topics more than others, or may have more expertise or capacity to work on them. Attempts at cooperation that do not account for this may end up settling for the lowest common denominator.

Implementing working groups could potentially mitigate many of these issues. These groups would be composed of subsets of the Network’s members. Participation would ideally be determined by the expertise and comparative advantages of each member. (Though participation could also benefit from a more inclusive format with access given to observer members that want to develop their expertise, for example.)

We suggest two types of working groups:

  • Standing working groups. These working groups would continue indefinitely and focus on stated priorities of the Network, such as monitoring AI harms and safety incidents. Topics for the standing working groups could also be drawn from those of the US AISI consortium’s working groups.

  • Project-based working groups. These groups would be time-limited, output-oriented, and based around specific projects, such as creating a particular kind of safety evaluation or compiling best practices for reducing a particular AI risk. Because these working groups are time-limited and would have narrower scopes, they might be particularly well-suited to including non-core members, such as Chinese research institutions and AI companies. If collaboration does not work out, these difficulties would only persist for the length of the project, and could inform inclusion or exclusion of members in future projects.

Opportunities for working groups to report on their progress could create a sense of urgency for their work. For example, representatives of working groups could present at subsequent meetings of the Network, or at other forums, such as the successors to the Bletchley Park AI Safety Summit.

3.2 Joint funding mechanisms

So far, the Network has had a low degree of institutionalization and has not required dedicated funding to function. However, joint funding mechanisms could allow the Network to go beyond the sum of its parts.

For example, the Network does not yet have a secretariat. In practice, secretariat functions have been performed by individual member AISIs (e.g., the UK AISI, the best-resourced one, has played a central supporting role across projects) and other government agencies (e.g., the US Departments of State and Commerce are supporting the upcoming San Francisco meeting), and so are directly funded by national governments. With a secretariat, the Network could relieve the pressure on individual AISIs to informally keep initiatives going, such as Network meetings and other joint projects. It could also facilitate continuity across efforts.

Another way in which joint funding could help would be to provide funding to working groups, allowing them to pursue more ambitious goals. It could also allow members to contribute even if they do not have the resources to fund their own participation, especially considering that there is significant variation in funding across different AISIs.

If the Network becomes more institutionalized, there might ideally be formulas to determine members’ required financial contributions. For example, members of some international organizations are required to pay contributions calculated according to factors such as their GDP. In the shorter term, contributions could come voluntarily from AISIs or other parts of government. The Network could also consider funding from non-government stakeholders (e.g., companies, philanthropic organizations, civil society organizations), especially for sponsoring specific projects, but it would have to set up more robust transparency mechanisms.
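As a simple illustration of how a GDP-based formula could work, the sketch below splits a hypothetical budget pro rata by GDP. The figures, the flat pro-rata rule, and the function itself are our own assumptions; real contribution schemes typically add caps, floors, and other adjustments.

    def contribution_shares(gdp_by_member: dict[str, float],
                            budget: float) -> dict[str, float]:
        # Split a hypothetical Network budget in proportion to members' GDP.
        total_gdp = sum(gdp_by_member.values())
        return {member: budget * gdp / total_gdp
                for member, gdp in gdp_by_member.items()}

    # Illustrative numbers only (not real GDP figures or a proposed budget):
    # contribution_shares({"A": 20.0, "B": 5.0, "C": 1.0}, budget=10_000_000)
    # gives A roughly 7.7 million, B roughly 1.9 million, and C roughly 0.4 million.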

3.3 Work together with other international AI governance forums

There are already many international forums for AI governance. Members will need to think about how the international network can avoid duplicating or clashing with them.

Examples include:

  • The 2023 Bletchley “AI Safety Summit” and its successors. Particular care might be needed here to avoid duplication given that the network's history is intertwined with that of these summits.

  • Various AI governance forums that primarily include countries that are aligned with the US. These include the G7 Hiroshima Process, the OECD, and GPAI.

  • International standard-setting processes led by bodies such as the ISO. At Seoul, the international network expressed the intention to accelerate international standards relating to AI safety.

  • UN processes such as the UNESCO Policy Dialogue on AI Governance and the Global Digital Compact.

The Network already differs from many of the forums described above in that it is more focused on implementation, in contrast to high-level principles or agreements. As an example, the San Francisco meeting is expected to be primarily attended by technical and scientific staff rather than diplomats or national leaders.

Additionally, AISIs already have a relatively narrow focus on primarily technical areas. For example, "first-wave AISIs" in Japan, the United Kingdom, and the United States have a focus on the most advanced AI and, in particular, on evaluations of the safety of such systems. By replicating this technical focus, the international network could give itself a more specific mandate, reducing possible overlaps with other forums.

As well as avoiding duplication, the Network can ensure that it works collaboratively with other forums. For example, representatives of other forums could be included in some of the Network’s activities, such as via observer membership, as described in our tiered membership proposal. Another example would be to connect the Network with the UN’s plans to establish an independent scientific panel that would produce scientific consensus reports about AI impacts. A further opportunity in a similar domain would be contributing to the continuation of the International Scientific Report on the Safety of Advanced AI, currently led by the UK AISI. The combination of the Network, the UN, and other multilateral efforts could enhance the speed and technical expertise available to the UN and other bodies, and the legitimacy and inclusivity of the Network’s efforts.

At the same time, we do not wish to downplay the importance of topics that would be left out of scope for the Network in this case. Our reasoning is not about whether other topics should be addressed by international policymakers, but rather about whether they should be addressed via the Network or in some other forum.

Conclusion

The International Network of AI Safety Institutes represents a promising step toward international cooperation on AI safety. While the Network faces important questions about its structure and focus, there are clear paths forward. A tiered membership structure could help balance inclusivity with effectiveness. Working groups and joint funding mechanisms could facilitate practical cooperation between members. By focusing on areas like evaluations, standards, and information-sharing, the Network can complement rather than duplicate other international efforts.

Success will require careful attention to implementation details and sustained commitment from members. However, if these challenges can be navigated effectively, the Network could play a valuable role in building the scientific and technical foundations for safer AI development. As AI capabilities continue to advance, such international cooperation on safety will only become more important.
