Not even chatbots want to talk about the weather

Rhodri Marsden tried to make casual chat with a bot known as Ultra Hal

The Independent Tech

Yesterday afternoon, I ran out of people to complain to about the weather, so I leapt into the unwelcoming arms of artificial intelligence. As London reached the temperature and consistency of a recently microwaved lasagne, I tried to make casual chat with Cleverbot, an online chatbot that's been around for a few years.

"Man alive, it's hot," I typed. "Do you believe in colours?" it replied. Ugh. I wasn't there to talk colours – I wanted to indulge in light chat about sultry weather. Desperate, I visited zabaware.com, the home of a bot known as Ultra Hal. "Hal can discuss any topic and learn and evolve from your conversations," says the blurb. "Man alive, it's hot," I typed. "What would it take to get you to reconsider?" asked Hal. I closed the browser window and went for a lie down.

We're often told that we're marching with great speed towards the Singularity, the point where artificial intelligence (AI) usurps our own. While various Turing tests (computers managing to convince humans that they're human) have supposedly been passed, they usually involve bending the rules, and I remain sceptical that we're anywhere near the point where I might be caught out. That scepticism is hardened by chatbots that fail to learn from their mistakes and offer a choice of placid agreement or rambling non sequitur. One YouTube video of a conversation between two chatbots proceeds thus: "What do you want to talk about?" / "In the lingo of the economist the ten commandments talk about property rights." That's not a conversation I want to eavesdrop upon.

But interesting work is being done. Some researchers at the LSE recently conducted experiments into "echoborgs" – getting humans to deliver AI responses to other humans – to see if giving chatbots human faces made them seem more "real" to us. They reckon that it does. And various kinds of human bridge between us and AI are now being used in a number of services, mostly SMS-based, mostly in the USA. Cloe is on hand to help you find local restaurants. Jarvis assists you with scheduling meetings. Riley finds you apartments. These are all powered by machines that get to know you and your behaviour, but have a human front-end to ensure that the messages don't make you roll your eyes in frustration.

It almost seems like a throwback to AQA, the SMS-powered service staffed by humans (but powered by search engines) that gives answers to questions in return for £2.50 – but this new breed of service is more about building relationships. And, weirdly, we seem to be keen on them. Invisible Girlfriend and its counterpart, Invisible Boyfriend, are staffed by a team of a few hundred people offering "meaningful conversations" while also providing "real-world proof that you're in a relationship". Initially conceived as chatbots, it became clear to the founder that the technology wasn't up to the task, so humans were recruited instead.

Steve Rousseau, an editor at digg.com, recently wrote a piece explaining how he tried Invisible Girlfriend for a joke, but ultimately ended up finding some small amount of meaning therein: "a text from a stranger, comforting another stranger." And how strange it is that despite all the connections that social media facilitates, we can find ourselves looking to strangers sitting at computers to reassure us that someone is out there. Even if it's only to talk about the weather.
