Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants

AI is likely to be ‘either the best or worst thing ever to happen to humanity,’ Hawking said, ‘so there's huge value in getting it right’

Andrew Griffin
Thursday 08 October 2015 15:28 BST
Chinese inventor Tao Xiangli modifies the circuits of his home-made robot at his house in Beijing, May 15, 2013

Stephen Hawking has warned that artificially intelligent machines could kill us because they are too clever.

Such computers could become so competent that they kill us by accident, Hawking has warned in his first Ask Me Anything session on Reddit.

A questioner noted that Professor Hawking’s ideas about artificial intelligence are seen as “a belief in Terminator-style ‘Evil AI’”, and asked how he would present his own beliefs.

“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.

“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”

Hawking said that eventually machines might become cleverer than their creators. Our own intelligence is no limit on that of the things we create, he said: “we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents”.


If they become that clever, then we may face an “intelligence explosion”, as machines develop the ability to engineer themselves to be far more intelligent. That might eventually result in “machines whose intelligence exceeds ours by more than ours exceeds that of snails”, Hawking said.

Hawking said that it wasn’t clear how long such artificial intelligence would take to develop — warning that people shouldn’t trust “anyone who claims to know for sure that it will happen in your lifetime or that it won't happen in your lifetime”.

But when it does happen, Hawking said, “it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right”. As such, we should “shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence”.

“It might take decades to figure out how to do this, so let's start researching this today rather than the night before the first strong AI is switched on,” Hawking said. That echoed the open letter on AI that preceded Hawking’s AMA, in which experts warned that if we are lax in thinking about artificial intelligence, computers could become too clever before we even realise it.

Before the robots become so powerful that they accidentally kill us, they might end up taking our jobs. Asked whether the rise of artificially intelligent robots could lead to “technological unemployment”, Hawking warned that the outcome would depend entirely on how the extra wealth they create is distributed.

“Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution,” Hawking said. “So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”
