The current wave of generative AI tools like ChatGPT will soon be surpassed by “interactive artificial intelligence”, according to AI pioneer Mustafa Suleyman.
The co-founder of DeepMind, which was acquired by Google for $500 million in 2014, said the next generation of AI tools will be “a step change in the history of our species”, allowing people to not just obtain information but also order tasks and services to be carried out on their behalf.
“The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we’re in the generative wave, where you take that input data and produce new data,” Mr Suleyman told MIT Technology Review.
“The third wave will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI.”
This will allow users to ask these AIs to perform tasks on their behalf, which they will carry out by talking to other people and interacting with other AIs.
“That’s a huge shift in what technology can do. It’s a very, very profound moment in the history of technology that I think many people underestimate,” he said.
“Technology today is static. It does, roughly speaking, what you tell it to do. But now technology is going to be animated. It’s going to have the potential freedom, if you give it, to take actions. It’s truly a step change in the history of our species that we’re creating tools that have this kind of, you know, agency.”
When questioned about the potential risks of giving artificial intelligence autonomy, Mr Suleyman said it was important to set boundaries for the technology and make sure that it is aligned with human interests.
When Mr Suleyman was still working at DeepMind, his colleagues helped develop what became known as a “big red button” that would effectively serve as an off switch for rogue AI.
A research paper titled ‘Safely Interruptible Agents’ described how any misbehaving robot could be shut down or overridden by a human operator in order to avoid “irreversible consequences”.