AI isn’t falling into the wrong hands – it’s being built by them

These are essential, urgent questions that will require input from a wide diversity of voices – especially those most likely to be imperilled by the use of AI

Arthur Holland Michel
Saturday 06 May 2023 13:58 BST

Geoffrey Hinton, dubbed the “godfather of AI”, has joined a growing chorus of voices urging more caution in the development of artificial intelligence, warning that it is hard to “prevent the bad actors from using it for bad things”.

But it’s not just the despots and tyrants who keep me awake at night – it’s the engineers, executives and politicians tasked with making science fiction a reality. The loudest voices in the AI discourse tend to exhibit a startling level of misplaced certainty about what AI is capable of and how the technology can be controlled.

In a new research project for Chatham House, I found that the national AI policies of dozens of countries, from China to Chile, are built on the same set of flawed assumptions about how to harness AI’s awesome power while minimising its risks – assumptions that are oft-repeated, but for which the scientific basis is perilously thin.

Take, for example, the claim that AI can be aligned to our ethical principles. Can it? While governments and companies tend to acknowledge that more work is needed to make ethical AI a reality, they never ask whether the very notion of ethical AI is even a technical possibility – especially if companies like Microsoft keep laying off the teams responsible for such delicate matters.

The notion of “unbiased” AI, for instance, is pure fiction. And yet we are barely engaging with the uncomfortable questions of what counts as an acceptable bias and what doesn’t – let alone the even thornier question of who should have the authority to decide which is which.

As distasteful as calls for anti-woke AI from Elon Musk and others might be, they do highlight an inconvenient truth: endowing any single entity with the authority to decide what generative AI should and shouldn’t be allowed to say and do will inevitably run afoul of common principles of freedom and equality.

What’s more, countries are beginning to take steps to enforce AI ethics. This might seem like a good thing. But when you consider that the law in some of those countries treats homosexuality as unethical, that prospect starts to seem rather less cheery.

Here’s another assumption. Everyone talks about AI in terms of a race among states and companies, with the richest spoils reserved for whoever moves first and fastest. Are we sure? Having more companies beavering away on AI than your geopolitical adversaries may give you an economic or military edge. But is this arms race compatible with the kinds of safe, fair AI that serve not only the economic interests of the elite but also those of society’s most vulnerable and dispossessed members? Perhaps not.

In some countries, including the UK, it is a matter of policy to act as if “artificial general intelligence” – AI of the kind that Hinton warns about – could someday become real. Yet the same tech executives who have been most effective in convincing governments of the threat of superintelligent AI are the very ones rushing to build the kinds of AI models that will make those risks a reality.

It’s tempting to think that they are right about these predictions, but we might pause to ask whether they have a hidden motive in convincing us that we’re in store for an imminent AI apocalypse. After all, a key part of their narrative is that the only way to stop bad AI is by embracing good AI – and that, of course, is their AI. Perhaps a more cautious, humanistic approach, one that looks beyond the tech realm for solutions to these problems, would be safer for everyone in the long run.

These are essential, urgent questions that will require input from a wide diversity of voices, especially those most likely to be imperilled by the use of AI. So far, the earliest victims of malign forms of AI, such as inaccurate facial recognition tools or deepfake porn generators, have been members of historically marginalised groups, including women and people of colour. But these groups are woefully underrepresented among the loudest voices in the AI debate.

The predominant assumptions of AI policy tend to reflect the views of only a narrow set of stakeholders, while neglecting the interests of those most likely to be harmed by all kinds of AI, whether superintelligent or not. And by treating these stakeholders’ opinions as facts, we are slamming the door on a truly open, good-faith dialogue. That needs to change.

It is time to recalibrate how we talk about AI. It is very challenging to make a chatbot that admits to being uncertain about what it’s saying. We can only hope that making humans fess up when they don’t know the answers is somewhat easier.
