The arrival of human-level artificial intelligence may be a lot closer than previously thought, according to leading AI researchers.
The point at which artificial general intelligence (AGI) exceeds human intelligence, often referred to as the AI singularity, has been debated among AI researchers and futurologists for many years, though most forecasts place that hypothetical date decades away.
In a far-reaching blog post about artificial intelligence safety, AI research firm Anthropic detailed how the “very rapid progress” of artificial intelligence would likely continue rather than stall or plateau, meaning AI could overtake humans within years.
“People tend to be bad at recognising and acknowledging exponential growth in its early phases,” the 6,500-word blog post stated.
“Although we are seeing rapid progress in AI, there is a tendency to assume that this localised progress must be the exception rather than the rule, and that things will likely return to normal soon.
“If we are correct, however, the current feeling of rapid AI progress may not end before AI systems have a broad range of capabilities that exceed our own capacities. Furthermore, feedback loops from the use of advanced AI in AI research could make this transition especially swift.”
The outcome of such advances, according to Anthropic, would be that “most or all knowledge work may be automatable in the not-too-distant future”. If correct, this would also have major implications for the rate of progress of other technologies, and therefore society more generally.
The blog post builds on previous comments by Anthropic co-founder Jack Clark, who said last month that he believed AI has started to display “compounding exponential” properties.
Similar comments have been made by other prominent AI researchers, with DeepMind’s Nando de Freitas claiming last year that “the game is over” in the decades-long quest to realise AGI.
The creator of ChatGPT has also said that new artificial intelligence tools will soon “make ChatGPT look like a boring toy”, leading to problems that it may not be possible to anticipate.
Sam Altman, chief executive and co-founder of OpenAI, claimed that ChatGPT is “incredibly limited” and creates a “misleading impression of greatness”, though he said that future versions of the technology will be radically improved.
“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he said in December.
The successor to ChatGPT, called GPT-4, is expected to be released in the coming weeks.