AI could be the “Great Filter” answer to the Fermi paradox, capable of wiping out intelligent life in the universe before it can make contact with others, suggests a yet-to-be peer-reviewed study posted on the arXiv preprint server.
The Fermi Paradox, captured popularly by the phrase “Where is everybody?”, has puzzled scientists for decades. It refers to the disquieting question of why, if extraterrestrial life is probable in the universe, humans have not yet encountered it.
Many theories have been proposed, offering different explanations for our solitary presence so far in the cosmos.
Even though probability calculations, such as the well-known Drake Equation, suggest there could be a number of intelligent civilisations in the galaxy, there is still a puzzling cosmic silence.
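For readers unfamiliar with it, the Drake Equation is a standard probabilistic estimate (not part of the study itself) that multiplies together the factors thought to govern how many detectable civilisations exist in our galaxy:

```latex
% The Drake Equation, as commonly stated:
% N  = number of civilisations in the Milky Way whose signals we might detect
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_* = average rate of star formation in the galaxy
% f_p = fraction of those stars that have planets
% n_e = average number of habitable planets per star with planets
% f_l = fraction of habitable planets on which life arises
% f_i = fraction of those where intelligent life evolves
% f_c = fraction of those that develop detectable technology
% L   = average length of time such civilisations remain detectable
```

The Great Filter hypothesis amounts to the claim that at least one of these factors is vanishingly small, which would reconcile optimistic estimates with the observed silence.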
One popular hypothesis – known as the Great Filter – suggests that some event required for the emergence of intelligent life is extremely unlikely, hence the cosmic silence.
An equivalent formulation of this theory is that some catastrophic phenomenon is likely preventing life’s expansion throughout the universe.
“This could be a naturally occurring event, or more disconcertingly, something that intelligent beings do to themselves that leads to their own extinction,” wrote study author Mark Bailey from the National Intelligence University (NIU) in the US.
The new research theorised that AI advancement may be the exact kind of catastrophic risk event that could potentially wipe out entire civilisations.
In the study, Dr Bailey frames the context of the Great Filter within the potential long-term risk of technologies like AI that we don’t fully understand.
“Humans are terrible at intuitively estimating long-term risk,” the NIU scientist said, adding that we do not fully understand AI, yet “it is rapidly infiltrating our lives”.
“Future AI will likely tend toward more generalizable, goal-directed systems with more meaningful control, where the consequences of unintended outcomes will become significantly more severe,” he warned.
Dr Bailey posited what he calls the “second species argument”, which raises the possibility that advanced AI could effectively behave as a “second intelligent species” with whom we would eventually share this planet.
Considering what happened when modern humans and Neanderthals coexisted on Earth, the researcher said the “potential outcomes are grim”.
“It stands to reason that an out-of-control technology, especially one that is goal-directed like AI, would be a good candidate for the Great Filter,” Dr Bailey wrote in the study.
“We must ask ourselves: how do we prepare for this possibility?”