Britain’s Prime Minister Rishi Sunak speaks during the closing press conference on the second day of the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, near Milton Keynes, Britain November 2, 2023. Justin Tallis/Pool via REUTERS/File Photo

SEOUL, May 21 (Reuters) – Sixteen companies involved in AI, including Alphabet’s Google GOOGL.O, Meta META.O, Microsoft MSFT.O and OpenAI, as well as companies from China, South Korea and the United Arab Emirates, have committed to the safe development of the technology.

The announcement, unveiled in a UK government statement on Tuesday, came as South Korea and Britain host a global AI summit in Seoul at a time when the breakneck pace of AI innovation leaves governments scrambling to keep up.

The agreement is a step up from the number of commitments made at the first global AI summit held six months ago, the statement said.

Zhipu.ai, backed by Chinese tech giants Alibaba 9988.HK, Tencent 0700.HK, Meituan 3690.HK and Xiaomi 1810.HK, as well as the UAE’s Technology Innovation Institute, were among the 16 companies pledging to publish safety frameworks on how they will measure the risks of frontier AI models.

The firms, also including Amazon AMZN.O, IBM IBM.N and Samsung Electronics 005930.KS, voluntarily committed not to develop or deploy AI models if the risks cannot be sufficiently mitigated, and to ensure governance and transparency in their approaches to AI safety, the statement said.

“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder at METR, a non-profit for AI model safety.

The AI summit in Seoul this week aims to build on the broad agreement reached at the first summit, held in the United Kingdom, to better address a wider array of risks.


At the November summit, Tesla’s Elon Musk and OpenAI CEO Sam Altman mingled with some of their fiercest critics, while China co-signed the “Bletchley Declaration” on collectively managing AI risks alongside the United States and others.

British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will oversee a virtual summit later on Tuesday, followed by a ministerial session on Wednesday.

This week’s summit will address “building… on the commitment from the companies, also looking at how the (AI safety) institutes will work together,” Britain’s Technology Secretary Michelle Donelan told Reuters on Tuesday.

Since November, discussion on AI regulation has shifted from longer-term doomsday scenarios to “practical concerns” such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere.

Industry participants wanted AI regulation that would give clarity and security on where companies should invest, while avoiding entrenching big tech, Gomez said.

With countries such as the UK and the U.S. establishing state-backed AI Safety Institutes to evaluate AI models, and others expected to follow suit, AI firms are also concerned about interoperability between jurisdictions, analysts said.


Representatives of the Group of Seven (G7) major democracies are expected to take part in the virtual summit, while Singapore and Australia were also invited, a South Korean presidential official said.

China will not participate in the virtual summit but is expected to attend Wednesday’s in-person ministerial session, the official said.

South Korea’s foreign ministry said Musk, former Google CEO Eric Schmidt, Samsung Electronics Chairman Jay Y. Lee and other AI industry leaders will participate in the summit.

(Reporting by Joyce Lee; Editing by Ed Davies and Christian Schmollinger)