Safety institutes to form ‘international network’ to boost AI research and tests

Ten nations and the EU have agreed to establish a network of AI Safety Institutes, but China is not among them.

Martyn Landi
Tuesday 21 May 2024 16:00 BST
Rishi Sunak hailed ‘international progress’ (PA)


Ten nations and the European Union have agreed to establish an international network of publicly backed AI Safety Institutes to advance global research and testing of AI.

Prime Minister Rishi Sunak said the agreement would mean “international progress” could be made on AI safety, after it was announced at the end of the first day of the AI Seoul Summit.

The UK announced it would create the world’s first AI Safety Institute during the AI Safety Summit held at Bletchley Park in November last year, to carry out research and voluntary evaluation and testing of AI models, with a number of other countries since announcing their own domestic institutes.

The newly signed “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will see the network of institutes share research, including details about models they have studied, with the aim of advancing global understanding of the science around artificial intelligence.

Alongside the UK, the United States, Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, and the EU signed the agreement, but one global AI powerhouse – China – was notably absent, and was not represented during the virtual meeting hosted by Mr Sunak and South Korean president Yoon Suk Yeol.

However, the Department for Science, Innovation and Technology (DSIT) has said the Chinese government is taking part in the wider summit, and a Chinese firm – – did sign a new safety agreement alongside other tech firms earlier in the day.

“AI is a hugely exciting technology – and the UK has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year,” Mr Sunak said.

“But to get the upside we must ensure it’s safe. That’s why I’m delighted we have got agreement today for a network of AI Safety Institutes.

“Six months ago at Bletchley we launched the UK’s AI Safety Institute. The first of its kind. Numerous countries followed suit and now with this news of a network we can continue to make international progress on AI safety.”

As part of the talks, world leaders also signed the Seoul Declaration, which declared the importance of enhanced international cooperation so that AI can be developed safely and used to solve major global challenges and bridge divides.

In addition to world leaders, the virtual meeting was also attended by a number of key figures from leading tech and AI firms, including Elon Musk, former Google chief executive Eric Schmidt and DeepMind founder Sir Demis Hassabis.

Technology Secretary Michelle Donelan, who is in Seoul and will co-host the second day of talks on Wednesday, said: “AI presents immense opportunities to transform our economy and solve our greatest challenges – but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology.

“Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own.

“Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.”

The new agreements from world leaders come after 16 major AI firms, earlier in the summit, committed to a set of safety procedures and to publishing frameworks setting out how they will measure the risks of their AI models and the thresholds at which they would cease development or deployment of a model.

Amazon, Google, Microsoft, Meta and OpenAI are among the companies which have signed up to the Frontier AI Safety Commitments.
