Rhodri Marsden: They may have passed the test, but the robots aren't taking over yet


The news was announced with such fanfare and excitement that you'd have been forgiven for bracing yourself for psychological subjugation at the hands of an army of sentient robots. "Turing test passed for the first time", screamed the headlines, before going on to explain that a Russian-produced chatbot had managed to convince a panel of humans attending an event at the Royal Society that it, too, was human.

"We're here!" I roared, sarcastically, at my coffee-stained laptop. "We have arrived in the future! Kindly take up the slack of my floundering social life, dear computer, and engage me on topics both broad and deep!" The laptop responded by flashing a cursor and marginally increasing the speed of its fan. Business as usual.

The idea of the Turing test being passed holds great popular appeal. It was first proposed, fairly casually, by Alan Turing in 1950, while considering the question of whether computers could imitate human beings during typewritten exchanges. Less to do with artificial intelligence (AI) and more to do with human gullibility, it has nevertheless inspired countless competitions and research projects over the years, all of them imposing arbitrary ideas of what constitutes "success", and then excitedly crowing when said criteria appear to have been fulfilled.

Saturday's event at the Royal Society required 30 per cent of the judges to be hoodwinked over a conversation lasting five minutes – both figures extrapolated fairly tenuously from Turing's writings in the absence of any firm criteria set down by the man himself.

Consequently, the idea of any Turing test pass signalling a tipping point in AI seems to be faintly absurd; indeed, as Mike Masnick wrote on technology site Techdirt this week, many in the AI world "look on the Turing test as a needless distraction". You could argue that there have been far more significant dupes in the history of human-computer communication than the judging panel at the Royal Society: perhaps the many Russian men who, in 2007, were being fooled by a chatbot called Cyberlover into handing over credit card details and other personal information, albeit on the promise of saucy fun; perhaps the 59.3 per cent of participants at a 2011 event in Guwahati, India, who rated a bot called Cleverbot as human (humans themselves, rather intriguingly, were only rated as human by 63.3 per cent of them); or even those of us who use online help facilities provided by companies such as Lloyds or Citroën, that are powered by chatbots and are sometimes more helpful than the call centre staff that they replace.

These are all akin to Turing test passes, and they all signal some kind of progression in the power of computing; but to pretend that a eureka moment exists when we're all supposed to throw our hats in the air feels somewhat bogus.

Over the past couple of days, that test at the Royal Society, organised by academics at the University of Reading, has been dissected online; many are contemptuous of the fact that the human the chatbot was supposed to imitate was a 13-year-old boy whose first language wasn't English, but Ukrainian. "Come back to us when you've modelled it on an English-speaking adult", their comments seem to say. No doubt the programmers will, once again claiming victory, and the news will be retweeted breathlessly while the real business of technological advancement carries on, quietly, with little fanfare and whizzing champagne corks noticeably absent.