In focus

As Victoria Coren Mitchell shows, computer malfunction can be a serious problem when it comes to our bills

As Victoria Coren Mitchell becomes the latest high-profile figure to claim money was wrongly taken from her account by an energy company, Charles Arthur looks at how this has become a growing problem and asks: is our blind faith in computer systems misguided?

Wednesday 13 March 2024 13:43 GMT
Only Connect TV presenter Victoria Coren Mitchell claimed Ovo Energy had ‘wrongly’ taken thousands of pounds from her bank account (BBC/Parasol Media Limited/Rory Lindsay)

Post Office helpline staff had a script when subpostmasters and postmistresses called about their problems with Horizon computers. “Nobody else is having these problems but you”, they would say. Thus “helpline” at once became unhelpful – more of a “helplie”.

Now we know the awful truth about a system whose flaws were baked in, where the humans who suffered because of them weren’t believed – and, even worse, where denial became an ingrained response to each new complaint. But why were the computers trusted so readily, even when some people knew they were faulty?

Daily life shows us that people – more specifically, organisations – trust computers probably more than they should. This week, broadcaster Victoria Coren Mitchell claimed Ovo Energy had ‘wrongly’ taken thousands of pounds from her bank account. Reacting to her social media post, hundreds of others shared their own horror stories of being wrongly issued with bills and demands from energy companies.

In December, the newsreader Jon Sopel – a columnist for this newspaper – discovered that his personal energy bill from EDF had vaulted from £152 a month to £19,000. A few days earlier, the artist Grayson Perry had seen his bill leap from £300 to £39,000. Both then had to grapple with customer services (doubtless experiencing a higher-than-expected volume of calls) to unravel the problem.

Ovo Energy said it was ‘always striving to provide the best possible experience for all our customers’, but you have to ask why any energy company wouldn’t pause before increasing a bill more than a hundredfold, as has happened in some cases. When Sopel and Perry publicised their shock on social media, drawing responses from many others who had had similar problems, EDF insisted – in a familiar-sounding phrase – that there was not “a wider problem” with its billing system. But with five million domestic customers, even a narrow problem could realistically affect hundreds or thousands.

System error: artist Grayson Perry was wrongly issued with an astronomical energy bill (Nick Mailer)

Ever since 1951, when the first business computer, the Lyons Electronic Office (LEO), was used to calculate ingredient costs for cakes and bread at the London catering firm J Lyons and Co, the gears of industry have spun ever faster on computation that reaches further and further into our lives.

At first, their errors were treated as a joke. “Send them a bill for a million pounds, Miss Jones,” reads one 1970s cartoon of a manager in a small office with his secretary. “They’ll think we’ve got a computer.”

But the joke has worn thin, and Horizon and the EDF billing issues demonstrate how humans are essential grit in those whirring gears. If Perry or Sopel had left the payment of their bills entirely to computers, without oversight, they would soon have been bankrupted. Had they not paid, and had EDF not taken up their complaints, they would have risked damage to their credit scores, along with who knows how many bailiff warnings and added charges. Without the tenacity of Alan Bates and the other victims of the Horizon system, who insisted that the computer, not the user, was at fault, a colossal injustice would never have been corrected.

There are plenty of other examples. Sometimes the errors are comedic: Google this month updated its Maps system to stop sending drivers down Greenside Lane in Edinburgh, where the road was replaced with steps last year. But sometimes, as with the Post Office, errors have serious and life-changing consequences.

In Australia, a system called Robodebt was introduced in 2016 by an incoming government, aiming to recoup any overpayment of benefits. It took each recipient’s annual income, averaged it evenly across the year’s fortnights, and compared the result with what they had declared while claiming benefits; any discrepancy suggesting someone had been earning more than expected triggered a “debt” notice.

However, income from casual work can vary from week to week and month to month, so a flat average made legitimate, lumpy earnings look like undeclared income – triggering thousands of incorrect “debt” claims. Even worse, notes Chiraag Shah of Oxford University’s Blavatnik School of Government in a report on Robodebt, “welfare recipients had to disprove overpayment” – a reversal of the usual burden of proof, but crushingly familiar to everyone who has ever dealt with an absurd automated payment demand.
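
To see why averaging misfires, here is a minimal sketch in Python – entirely hypothetical figures and logic, not Robodebt’s actual code – of a casual worker whose pay arrives in bursts:

```python
# Hypothetical sketch of the averaging trap, not Robodebt's real code.
# Annual income is smeared evenly across 26 fortnights and compared with
# what the recipient actually reported for each fortnight.

FORTNIGHTS_PER_YEAR = 26

def flag_discrepancies(reported: list[float]) -> list[int]:
    """Return the fortnights where the averaged figure exceeds the report."""
    assert len(reported) == FORTNIGHTS_PER_YEAR
    averaged = sum(reported) / FORTNIGHTS_PER_YEAR
    # Flawed assumption: income arrived evenly, so any fortnight reported
    # below the average looks like under-declared earnings - a "debt".
    return [i for i, earned in enumerate(reported) if earned < averaged]

# Two fortnights of work, nothing the rest of the year - all declared.
reported = [900.0, 900.0] + [0.0] * 24
print(flag_discrepancies(reported))  # flags 24 of 26 fortnights
```

Every pound was declared in full, yet the check still disputes 24 of the 26 fortnights.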

In all, AUS$746m (£391m) was wrongly taken from 381,000 people; some took their own lives. The Australian government finally wrote off AUS$1.75bn of debts in 2020.

So is the problem that we trust computers too much? The journalist Ros Taylor, whose book ‘The Future of Trust’ is published next month, says we need to rethink our relationship with them.

“The idea of trusting a computer is fundamentally misconceived,” Taylor told me. “Trust is a moral relationship and computers are a tool created for people to perform a certain function.”

And computer systems are being entrusted with tasks that raise serious ethical issues. They can decide whether someone lives or dies, for example “the semi-autonomous weapons like the one that killed al-Qaida leader Ayman al-Zawahiri, or asylum seekers whose cases might be decided by an algorithm”.

Ethics also comes into the equation when there are financial incentives to rubber-stamp a computer’s decision, making that decision all the harder to question. Some Post Office staff, for example, received bonuses for every Horizon conviction.

In English courts, the output of a computer is presumed to be correct and admissible as evidence unless a malfunction can be proven – a presumption that took hold in 1999, when a safeguard in the Police and Criminal Evidence Act 1984 (PACE), which had required proof that a computer was operating properly, was repealed. But how many of us are qualified to prove a computer can be wrong? Whether the system is Horizon or an energy company’s billing platform, every computer we deal with is assumed to be right, and if we disagree, we are assumed to be wrong.

ChatGPT, made by OpenAI, is known to produce ‘hallucinations’, whereby it makes up ‘fantasy’ court cases to win an argument (AP)

“For all the principled chatter about not letting AI make decisions which could harm people,” Taylor says, “humans will seize the opportunity to delegate responsibility to a computer if it enables them to avoid moral responsibility.”

How do computer systems with known flaws get through the approval process, though? David, who asked to be identified only by his first name, has long experience in computing and now works in IT security at a large bank. “In a project, there are a number of steps to ensure quality,” he explained to me. “They can be – and, to be frank, frequently are – ‘shortened’ in the interests of speed and delivery, usually dictated by a salesperson overpromising, or a project manager under-managing, or cost. Bugs become tolerated, workarounds developed.”

With Horizon, David explains, “someone knew the issues were there. And someone, at some level, chose to either ignore this, or actively cover it up. Once you start, there are a thousand reasons why you might find it difficult to get off.” Like a ship being launched, a big software project heading down the slipway is effectively impossible to stop. The only hope is to fix the problems later.

Fujitsu knew about problems with Horizon. An internal bug report dated 28 June 1999 noted that a previous fix for the accounting software had in turn created a new problem, one that would wrongly multiply the size of the cash account if the user deviated even slightly from a specific sequence of steps – what programmers call the “golden path”, a reliance common in early or buggy software. (When Steve Jobs showed off the first iPhone in January 2007, he followed a carefully choreographed demo – a golden path – because any deviation could have caused an embarrassing crash.)

But that catastrophic bug wasn’t treated as urgent, even though the programmer who found it warned (in capitals) that it should be highlighted to the helpline staff. Instead, the very next day, the problem was downgraded to “priority B” by a manager. The ship was in the water and starting to sail.
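
To see how a golden-path assumption can quietly corrupt an account, consider this deliberately simplified sketch – purely illustrative, and in no way Horizon’s actual code:

```python
# Toy example of a golden-path bug, not Horizon's real code: the handler
# is only correct if the user performs each step exactly once, in order.

class TillSession:
    def __init__(self) -> None:
        self.cash_account = 0.0
        self.pending = 0.0

    def enter_amount(self, amount: float) -> None:
        self.pending = amount

    def confirm(self) -> None:
        # Golden-path assumption: confirm() follows enter_amount() exactly
        # once. A second press books the same amount again, silently
        # inflating the cash account.
        self.cash_account += self.pending

till = TillSession()
till.enter_amount(100.0)
till.confirm()
till.confirm()             # a slight deviation: confirm pressed twice
print(till.cash_account)   # 200.0, though only 100.0 was ever entered
```

One stray key press and the till believes it holds money that was never there – and it is the user, not the program, who gets the blame.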

“It is entirely feasible that a staggering level of incompetence caused the initial issues,” David, the IT security expert, observes. “However, imagine a graph with two diagonal lines going in opposite directions, one labelled ‘incompetence’, the other ‘lying cover-up’. As time goes on the incompetence recedes, but the cover-up increases.”

Many subpostmasters were led to believe they were wrong as they searched for money that computers insisted was missing (Getty and Graham Livesey)

Perhaps, though, we’re heading towards a new era in our attitude to safeguards around computer systems. Ironically, it’s the newest advances that could enable it. AI systems such as OpenAI’s ChatGPT and Google’s Bard use “large language models” that give plausible answers to any question you throw at them. The trouble is that those answers can be “hallucinations” – entirely fictional. In the US and UK, court cases have been lost because people asked ChatGPT for precedents to back up their claims, then discovered that no such cases ever existed.

Stung by being caught out, lawyers now know not to put their trust in AI; other professions will probably follow. And we can expect more technology professionals called to the stand as expert witnesses to explain how things can go wrong with computers, just as doctors are called when in-depth medical knowledge is required for a case.

The legendary computer scientist Professor Niklaus Wirth famously wrote in 1995: “A system that is not understood in its entirety, or at least to a significant degree of detail by a single individual, should probably not be built.”

Yet, as Taylor notes, “hour by hour in modern society we have to place our trust in things we don’t understand – financial systems, cars, the ingredients of a supermarket sandwich”. And this only happens because of accumulated trust in businesses, governments and the power of the law.

But if Horizon, Robodebt and Greenside Lane can teach us anything, it’s that we should always pause before giving computers our absolute trust.

Charles Arthur is the author of ‘Social Warming: How Social Media Polarises Us All’
