
There is no evidence that AI can be controlled, expert says

Andrew Griffin
Monday 12 February 2024 17:23 GMT
Switzerland Davos Forum (Copyright 2024 The Associated Press. All rights reserved)

There is no evidence that artificial intelligence can be controlled and made safe, an expert has claimed.

Even partial controls would not be enough to keep us safe from AI reshaping society, perhaps for the worse, said Roman V Yampolskiy, a Russian computer scientist from the University of Louisville.

Nothing should be taken off the table in an attempt to ensure that artificial intelligence does not put us at risk, Dr Yampolskiy argued.

He said he had come to the conclusion after a detailed review of the existing scientific literature, the findings of which will be published in an upcoming book.

“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” said Dr Yampolskiy in a statement. “No wonder many consider this to be the most important problem humanity has ever faced.

“The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

The research for that book – AI: Unexplainable, Unpredictable, Uncontrollable – showed that there is “no evidence” and “no proof” that it would actually be possible to solve the problem of uncontrollable AI.

Since it appears that no AI can be fully controlled, it is important to launch a “significant AI safety effort” to ensure it is made as safe as possible, he argues.

But, even then, it may not be possible to protect the world from those dangers: as an AI becomes more capable there are more opportunities for safety failings, so it would not be possible to protect against every danger.

What’s more, many of those AI systems are not able to explain how they came to the conclusions they did. Such technology is already being used in fields such as healthcare and banking – but we might not be able to know how those important decisions were actually made.

“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” Dr Yampolskiy said in a statement.

Even a system that was precisely built to follow human orders might run into issues, he noted: those orders might contradict each other, the system might misinterpret them, or it could be used maliciously.

That could be avoided by using AI more as an advisor, with a human making the decisions. But for it to do that, it would need superior values of its own with which to advise humanity.

“The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a ‘no’ while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both,” Dr Yampolskiy said.
