
In Focus

Britain has made a huge miscalculation – the era of the AI unemployables is here

While the government heralded plans for Britain’s first automated research lab, leading AI expert Professor Yu Xiong, who previously chaired the UK all-party parliamentary group on the metaverse and web 3.0, says there is a shocking lack of preparedness for the potential consequences

It is estimated that in Britain alone, eight million jobs could now be exposed to AI-powered automation (Getty/iStock)

When the governor of the Bank of England, Andrew Bailey, warns that AI will displace workers as profoundly as the industrial revolution, the era of blinkered optimism is over. Britain has spent decades assuming that services and research would protect it from its post-industrial decline. Now, as we enter the automated age, that promise is truly collapsing.

At the end of last year, Google’s AI arm, DeepMind, unveiled plans for Britain’s first automated research lab – one strand of Silicon Valley’s $40bn push into new AI infrastructure. While ministers welcomed it as evidence that Labour’s global AI ambitions were finally taking shape, such promises deserve caution.

Many assumed that, unlike the automation that hollowed out British agriculture and manufacturing jobs, research and development would remain a stubbornly human preserve. DeepMind is ending that illusion.

Part of the miscalculation lies in what we think AI does. Many believe it understands ideas in broadly human terms. But as experts like Sachin Dev Duggal and Gary Marcus point out, today’s systems are not reasoners at all. They are powerful statistical engines – “reasoning parrots” – extracting patterns from existing work and turning knowledge into repeatable processes machines can perform faster than people. That is why so much supposedly cognitive work has proved easier to automate than expected.
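To see what "extracting patterns" means in practice, consider the deliberately crude sketch below: a toy word-level model that learns only which word tends to follow which in a small text, then generates new sentences by replaying those statistics. The corpus and code are invented for illustration, and real systems are vastly larger, but the underlying principle – statistical pattern replay rather than understanding – is the point being made.

```python
# A toy "statistical parrot": it learns which word tends to follow which,
# then generates text by replaying those patterns. It has no model of
# meaning at all. This is purely an illustration of pattern extraction,
# not a description of how any production AI system is built.
import random
from collections import defaultdict

corpus = (
    "automation displaces workers and automation reshapes research "
    "while research funds automation"
).split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# Generate new text by sampling from the observed patterns.
random.seed(0)
word = "automation"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # fluent-looking output, no understanding behind it
```

Scaled up by many orders of magnitude, this is the sense in which "reasoning parrots" can produce fluent output while remaining statistical engines.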

There is no mystery how this ends. As far back as 2016, Foxconn, a Chinese supplier to Apple and Samsung, replaced more than 60,000 factory workers with robots. Ten years on, according to recent reports, Amazon is set to use more robots in its warehouses than human employees.

Already, more than a million machines are deployed across its facilities doing the heavy warehouse work: picking items from tall shelves and moving goods around. According to The Wall Street Journal, others are becoming advanced enough to help humans sort and package orders.

In December, speaking on BBC Radio 4’s Today programme, the Bank of England’s governor, Andrew Bailey, suggested the widespread adoption of AI could mirror previous profound societal shifts. Mr Bailey said: “As you saw in the industrial revolution, now over time, I think we can now sort of look back and say it didn’t cause mass unemployment, but it did displace people from jobs, and this is important.

“My guess would be that it’s most likely that AI may well have a similar effect. So we need to be prepared for that, in a sense.”

‘Godfather of AI’ Geoffrey Hinton is one of many experts to warn of the impact of AI on jobs (AP)

However, just a few days ago, Geoffrey Hinton, the computer scientist known as the “godfather of AI”, was even starker in his assessment. In an interview on CNN’s State of the Union, he said that AI will have the “capabilities to replace many, many jobs” in 2026.

“We’re going to see AI get even better. It’s already extremely good,” Hinton said. “Each seven months or so, it gets to be able to do tasks that are about twice as long.” Noting that AI has already moved from “a minute’s worth of coding” to “whole projects that are, like, an hour long”, he added: “In a few years’ time, it’ll be able to do software engineering projects that are months long, and then there’ll be very few people needed.”
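Hinton’s rule of thumb describes exponential growth, and a back-of-envelope calculation shows why “months-long projects in a few years” follows from it. The figures below – an hour-long task today, roughly 160 working hours in a month – are illustrative assumptions, not numbers from the interview.

```python
# Back-of-envelope reading of Hinton's rule of thumb: the length of task an
# AI can handle roughly doubles every seven months. Starting values are
# illustrative assumptions, not figures quoted in the article.
import math

current_task_hours = 1           # roughly an hour-long coding project today
target_task_hours = 160          # roughly one month of full-time work
doubling_period_months = 7

doublings_needed = math.log2(target_task_hours / current_task_hours)
months_needed = doublings_needed * doubling_period_months

print(f"{doublings_needed:.1f} doublings, about {months_needed / 12:.1f} years")
```

On those assumptions it takes a little over seven doublings, or roughly four years, to go from hour-long tasks to month-long ones – broadly the trajectory Hinton describes.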

It is estimated that in Britain alone, eight million jobs could now be exposed to AI-powered automation. In earlier transitions, those displaced could regain economic footing elsewhere. Today, automation cuts across sectors at once, from manufacturing to white-collar jobs, leaving fewer routes back into paid work. Research and development jobs will fare no better. And once even this middle-class refuge gives way, what comes next?

The worry is that those in power don’t understand the gravity of the situation. In my recent work chairing the all-party parliamentary groups on the metaverse, web 3.0, and blockchain, I found that the mood inside government was not one of ignorance so much as one of structural mismatch. Ministers, advisers, and civil servants focused on competitiveness – on not “missing the next wave” of growth – while the consequences of automation were treated as downstream problems.

Gary Marcus, Professor Emeritus at New York University, has warned that AI systems are not ‘reasoners’ (AFP/Getty)

What concerned me was how little attention was being given to how these technologies would reshape labour, and how quickly that would happen. AI policy prioritised acceleration over preparedness, and the worry is that the government will be left racing to respond to social consequences that arrive faster than its institutions are designed to handle.

The risk is that this narrow focus on growth produces something far less benign: an AI-driven underclass, permanently displaced from work and pushed towards dependence – a modern lumpenproletariat. In my view, there wasn’t and isn’t enough thought given to just how little capacity the state has to manage large-scale displacement.

And this is no longer a sometime-in-the-future problem, either. The outlines of this new caste are already visible. A recent King’s College study found that firms most exposed to AI reduced employment by 4.5 per cent, while job listings fell by nearly a quarter. It also found that high-paying firms and professional occupations had experienced the most significant declines in employment and wages due to AI.

This matters because social spending is already at record levels, with close to a third of a trillion pounds being devoted to welfare. At the same time, millions are economically inactive through sickness or exclusion. Add another cohort of AI unemployables, especially in the high-wage sectors that have traditionally propped up Britain’s economic and social model, and you can see how it begins to fail. While conversations are being had about universal basic income, far fewer are being had about who exactly is going to pay for it.

Doom prepping by tech billionaires isn’t driven by worry that the robots will turn their guns on humans or instigate a catastrophic biotech event. Those at the top of the tech tree understand that it won’t be war or plague that drives anyone underground. If AI hollows out the middle class – the most productive part of society, responsible for two-thirds of income tax receipts – the entire system collapses. It may be that those fretting about billionaires busy building their bunkers are onto something.

Google DeepMind’s Aeneas AI has been used to decipher context in ancient Roman Latin texts (Robbe Wulgaert)

You don’t need to indulge apocalyptic fantasies to see the danger. Displaced workers often seek narratives that give their anger shape. Left feeling underpaid, underappreciated and under-employed, people quickly mobilise and look for someone to blame. This ire and dissatisfaction can be readily exploited by those offering easy yet untested answers. Even industry voices warn that large-scale AI-driven displacement, if left unmanaged, carries the seed of social unrest.

But perhaps the biggest crisis of all lies in what happens to our collective self-worth. AI now presents a paradoxical promise – liberation through replacement. Machines can already outperform us in precision, endurance, and analysis. Yet in modern Britain, vocation is not just how people earn a living; it is the organising principle of adult life. Remove that at scale and what follows is a loss of identity that neither welfare nor rhetoric can easily repair.

That is why, as we enter the new year, this moment is not only about jobs, but the future of social purpose. And this is where AI forces us to confront the question we have spent decades avoiding.

Marx once sought to free humanity from the alienation of labour. Automation may yet complete that project, not through revolution, but through technological inevitability – much as the cotton gin displaced manual work. But if its gains accrue only to those who own the systems, efficiency quickly curdles into hierarchy. As wealth and power concentrate in the hands of the few who own the machines, the digital age could, without safeguards, remake economic dependence into a kind of modern serfdom masked as progress.

The danger is that the government remains stuck measuring progress through paid employment, while the nature of work quietly changes underneath it. But an AI future could point towards an opportunity too – what could be described as Human 3.0. Not a world of leisure, per se, but one in which fewer people live to work and instead live to generate value for themselves and others. If creation, ownership and autonomy begin to matter as much as wages, we could move towards what I call a “participatory economy” – one that recognises value beyond money earned.

Cultural production, local leadership and civic work already sustain society, yet sit outside the labour market. If automation is to work politically, its gains must be shared through participation and dividends, not simply higher returns to capital. Ultimately, as a society, we need to understand how to measure, celebrate and reward human contribution beyond the wage packet. Efficiency alone is no longer a valid metric of success. The difficulty is how we measure this worth; contribution outside employment is hard to see, let alone reward. Could AI help us with that?

Professor Yu Xiong believes the UK government’s focus on growth may be misplaced (Youtube/Peace One Day)

Today’s large language models (LLMs) are powerful pattern learners that can imitate reasoning, but they don’t reliably represent meaning. They forget, hallucinate facts, and are only as sound as the data they are trained on. Anyone who has watched LLMs fabricate sources has seen how easily popularity can be mistaken for usefulness. Left unchecked, AI risks reproducing the same distortions that plague the digital economy.

But AI is still in its infancy. Researchers are already exploring approaches that move beyond surface-level pattern matching. One such approach is neurosymbolic AI, which marries statistical learning with logic and structured reasoning. Experiments are already underway on how AI might one day help distinguish genuine contribution from noise. The real revolution could come when contribution can be truly recognised without treating wages as the sole signal of value. Dubbed a new frontier of intelligence, the aim is a system that, as a researcher from SeKondBrain puts it, ‘feels less like a tool, and more like a partner in thought’.
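As a very rough illustration of the neurosymbolic idea – not a description of any specific research system, and not of SeKondBrain’s work – one can imagine a statistical component that proposes an answer with a confidence score, while a layer of explicit symbolic rules decides whether to accept it:

```python
# A deliberately simplified sketch of the neurosymbolic idea: a statistical
# component proposes an answer with a confidence score, and a symbolic layer
# of explicit rules accepts or rejects it. Everything here (the scorer, the
# rules, the example claims) is invented for illustration and does not
# describe any real research system.

def statistical_scorer(claim: str) -> float:
    """Stand-in for a learned model: returns a made-up confidence score."""
    return 0.9 if "cited source" in claim else 0.4

SYMBOLIC_RULES = [
    # Each rule is a hard constraint the statistical output must satisfy.
    lambda claim: "cited source" in claim,      # must point to evidence
    lambda claim: "unverified" not in claim,    # must not be flagged
]

def neurosymbolic_check(claim: str) -> bool:
    """Accept a claim only if the scorer is confident AND every rule holds."""
    confident = statistical_scorer(claim) > 0.7
    consistent = all(rule(claim) for rule in SYMBOLIC_RULES)
    return confident and consistent

print(neurosymbolic_check("result backed by a cited source"))    # True
print(neurosymbolic_check("plausible-sounding unverified fact"))  # False
```

The design point is the division of labour: the statistical part supplies fluency and recall, while the symbolic rules supply the hard constraints that pure pattern-matching lacks.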

We need to think in terms of fairness, transparency and sustainability. Technology should enhance these principles, not erode them. Technology needs to serve humanity, not the other way around. If Britain gets this wrong, automation will wipe out currently recognised forms of contribution faster than society can replace them. And unless there is a significant change of thinking at the top, the state will be wholly unprepared to absorb the fallout.

Professor Yu Xiong is a Fellow of the Academy of Social Sciences and founder of the Surrey Academy for Blockchain and Metaverse Applications. A leading expert in AI, he previously chaired the UK all-party parliamentary group on the metaverse and web 3.0
