AI poses same risk as nuclear war, experts warn

Artificial intelligence bosses say mitigating risk of extinction from AI should be ‘global priority’

Anthony Cuthbertson
Wednesday 31 May 2023 10:13 BST

The heads of two of the leading AI firms have once again warned of the existential threat posed by advanced artificial intelligence.

DeepMind and OpenAI chief executives Demis Hassabis and Sam Altman added their names to a short statement published by the Centre for AI Safety, which urged regulators and lawmakers to take the “severe risks” posed by the technology more seriously.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The Centre for AI Safety, a San Francisco-based non-profit, was founded in 2022 with the aim of reducing “societal-scale risks from AI”, claiming that the use of artificial intelligence in warfare could be “extremely harmful” if used to develop new chemical weapons and enhance aerial combat.

Signatories of the latest statement, which did not specify how such an extinction might come about, also included business and academic leaders in the space.

Among them were Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”, and Ilya Sutskever, a co-founder of ChatGPT developer OpenAI.

The list also included dozens of senior bosses at companies like Google, the co-founder of Skype, and the founders of AI company Anthropic.

AI has entered the global consciousness after several firms released new tools allowing users to generate text, images and even computer code simply by describing what they want.

Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern.

The emergence of tools like ChatGPT and Dall-E has resurfaced fears that AI could one day wipe out humanity if it surpasses human intelligence.

Earlier this year, tech leaders called on leading AI firms to pause development of their systems for six months in order to work on ways to mitigate risks.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the open letter from the Future of Life Institute stated.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Additional reporting from agencies
