
ChatGPT creator says there’s 50% chance AI ends in ‘doom’

Warnings of artificial intelligence apocalypse continue to grow

Anthony Cuthbertson
Wednesday 03 May 2023 05:59 BST

One of the creators of ChatGPT has added to a growing chorus of researchers warning of the potentially catastrophic consequences of artificial intelligence development.

Former OpenAI researcher Paul Christiano, who now runs the AI research non-profit Alignment Research Center, said he believed there was a significant chance that the technology would lead to the destruction of humanity.

The main danger, he claimed, will come when AI systems reach and then surpass the cognitive capacity of humans. Dr Christiano predicts there is a “50/50 chance of doom” once this moment arrives.

“I tend to imagine something like a year’s transition from AI systems that are a pretty big deal, to kind of accelerating change, followed by further acceleration, et cetera,” he told the Bankless podcast.

“I think once you have that view then a lot of things may feel like AI problems because they happen very shortly after you build AI.”

Former OpenAI researcher Paul Christiano now runs the Alignment Research Center (Screengrab/ YouTube)

He added: “The most likely way we die involves – not AI comes out of the blue and kills everyone – but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us.”

The comments come amid heightened concern surrounding the rapid advancement of artificial intelligence in recent months, with the so-called ‘godfather of AI’, Geoffrey Hinton, quitting Google in order to sound the alarm about the technology’s dangers.

Speaking to The New York Times, Dr Hinton said he regretted the work he had contributed to the field, given the unpredictable future it has created.

“The idea that this stuff could actually get smarter than people – a few people believed that – but most people thought it was way off. And I thought it was way off. I thought it was 30-50 years or even longer away. Obviously I no longer think that,” he said.

“I don’t think they should scale this up more until they have understood whether they can control it.”

His stance has been praised by other researchers, with Anthropic’s Catherine Olsson saying it may encourage others within the field to speak up.

“In college I stopped eating meat, on the spot, when a friend asked why I hadn’t yet. Social checks on our ethics can be so influential,” she tweeted. “I often think about when I would quit Anthropic or leave AI entirely. I encourage others to. I can already tell this move will influence me.”

Other prominent figures have also urged AI firms to pause development of advanced systems, most recently through an open letter signed by thousands of experts, which called for a pause of at least six months and urged governments to step in if one was not implemented.

Among the signatories was Elon Musk, who has frequently spoken about the existential threat posed by AI. The tech billionaire, who co-founded OpenAI, tweeted on Monday: “Even benign dependency on AI/Automation is dangerous to civilization if taken so far that we eventually forget how the machines work.”
