Elon Musk, the US billionaire behind projects such as SpaceX and Tesla, has warned that artificial intelligence is “potentially more dangerous than nukes”.
Musk added that humanity should be “super careful” with such technology, making the comments while recommending Superintelligence, a book by Nick Bostrom that explores the future of humanity when machines surpass us in intelligence.
Bostrom, a Swedish philosopher at the University of Oxford and director of its Future of Humanity Institute, says that most scientists agree the creation of a human-level AI is inevitable, reporting that 90 per cent of top researchers in the field guess that we’ll achieve this goal between 2075 and 2090.
However, he argues, the really important issue is what we do with this first superintelligent creation - and how we build it. Whichever AI surpasses human-level intelligence first will have the advantage over pretty much everything and everyone else on Earth.
Bostrom says that if we accidentally create an AI that is anything less than well-inclined towards humans (in contrast with the whimsical but ultimately benevolent computer ‘Minds’ of Iain M. Banks’ Culture novels) then the results could be disastrous.
But if we do create a superintelligence that is obedient or endowed with a sense of ethics like Isaac Asimov’s Three Laws of Robotics then the rewards could also be staggering, accelerating human progress at unimaginable rates. After all, how can we even begin to imagine what a post-human intelligence is capable of when we are still resolutely human?
Musk, however, is apparently inclined towards gloomier predictions, with a subsequent tweet imagining humanity as the “biological boot loader” (the preliminary bit of software that loads an operating system) for a “digital superintelligence”. Thanks, Elon, we’ve seen The Matrix too.