Almost Human

A new film suggests that robots could one day overtake mankind. But the truth is that it may already have happened

"Come on, you stupid machine!" It's a cry that will be heard in offices and homes round the country today as people sit in front of unresponsive computers; in that context, it might at most arouse a wry grin. But what if you were walking along and heard it, and turned round to find someone yelling it at a robot dog? More amusing? Or what if you turned round and found they were shouting at what looked like a child?

The scenario with a robot dog is entirely feasible today. Thousands of people have bought Sony's AIBO, the robotic animal that learns to like its owner and behaves in some ways like a real (if very small) dog; when you first start it up it can't even walk. But it rapidly learns and even exhibits "moods".

"[My Aibo] Alpha is definitely in the middle of her 'terrible twos' stage. She is often in a difficult mood... doesn't want to play with her ball, just wants to walk around by herself," noted John Lester, of the neurology department at Harvard University, who kept an online diary of his "dog's" development from October 1999.

Today, it's dogs. Surely the day is coming when we will see "children" on the streets who are actually machines. That is the premise of A.I. Artificial Intelligence, the new film released next week. Directed by Steven Spielberg (who took the project over from Stanley Kubrick after the latter's death), it is based on Brian Aldiss's Supertoys trilogy of science fiction short stories, and tells the tale of a boy (David) who is really a robot, mothered by a real human (Monica). David is not the first boy robot, but he is the first who is programmed to love. The world he exists in is full of other robots, some simply there to serve, but all sentient.

To scientists, the film's premise sets off a firecracker-like series of questions: what is love? What is emotion? What is a mind? What is consciousness, and how do you get it? Is a machine that shows human emotions a human? And can machines ever be either intelligent or emotional?

Alan Bundy doubts that the events set out in A.I. Artificial Intelligence are close at hand. As professor of artificial intelligence at Edinburgh University, recognised as Europe's foremost centre for AI studies, he doesn't think he's going to be out of a job any time soon. "It's centuries away," he said confidently yesterday. "There are so many conceptual problems: we are used to thinking that computer technology advances at a rapid pace, but in fact understanding 'consciousness' requires advances in the underpinnings of science. And that will move at the same pace as other sciences."

As a science, AI is comparatively young; it only acquired its name in 1956, when the American scientist John McCarthy convened a conference at Dartmouth College, New Hampshire, to discuss what machines could and might do in the future. From there emerged the various strands of thinking about creating artificial minds.

You might think that nothing much has happened since then: after all, robot dogs are newfangled things, and you still can't buy a robot butler that won't suck the cat into the vacuum cleaner while you're out. But you'd be wrong.

While you weren't looking, computers have started making all the difficult decisions for you, and for other people such as bank managers, engineers and share dealers. Want a mortgage? A computer will assess your creditworthiness based on your past transactions. Want a car, house, life insurance? The person you speak to on the phone is simply feeding your details into the maw of the computer. Bought something on your credit card? A "neural network" program at the card company's offices compares when, where and what you bought against its own "knowledge" of how stolen credit cards are used; if a purchase looks wrong, you'll get a call asking if you still have your card.
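The card-screening idea can be sketched in a few lines. This is a toy illustration only: real issuers use neural networks trained on millions of transactions, whereas the hand-written rules, names and thresholds below are all invented for the example.

```python
# Toy fraud screen: score a transaction against the cardholder's
# typical behaviour and flag it if enough signals fire at once.
# Illustrative only -- not any bank's actual system.

def anomaly_score(txn, profile):
    """Return a score from 0 to 3; higher means more unusual."""
    score = 0
    if txn["amount"] > 3 * profile["typical_amount"]:
        score += 1                       # unusually large purchase
    if txn["country"] != profile["home_country"]:
        score += 1                       # unfamiliar location
    if txn["hour"] not in profile["usual_hours"]:
        score += 1                       # odd time of day
    return score

def looks_stolen(txn, profile, threshold=2):
    # A phone call to the cardholder is triggered above the threshold.
    return anomaly_score(txn, profile) >= threshold

profile = {"typical_amount": 40.0, "home_country": "UK",
           "usual_hours": range(8, 23)}

normal = {"amount": 25.0, "country": "UK", "hour": 13}
odd = {"amount": 900.0, "country": "RU", "hour": 3}

print(looks_stolen(normal, profile))  # False
print(looks_stolen(odd, profile))     # True
```

The point of the "neural network" in the real systems is that these rules and weights are learned from data rather than written by hand, but the output is the same kind of judgement: does this purchase fit the pattern?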

Computers run your car's engine, and can navigate you from A to B if you get an onboard system. Apart from a few moments at take-off and landing, aircraft are flown almost entirely by computers. Stock markets are moved up and down by the automated buying and selling of millions of shares and bonds, triggered whenever the indices shift; human traders see less and less of the real action.

Nowadays, the staff in call centres are not chosen for their technical expertise; they don't have any – it all sits inside the computer. They are hired to answer the telephone because humans prefer dealing with humans. The computers could pass on just as much knowledge, using techniques like speech recognition and generated voices. It's just that we prefer the way humans act.

But is any of what the machines are doing "intelligent"? Many people think not. But as Professor Bundy points out, we keep moving the goalposts. "It's always the case: as soon as we can implement something human-like on a computer, it ceases to be mysterious. People think that intelligence is unknowable. But really it's a collection of abilities. Over time, we will learn to respect machines and what they can do more and more."

Igor Aleksander, professor of neural systems engineering at Imperial College in London, thinks that the key to making computers more like humans is for them to incorporate emotions. "There are actually five elements which are required, from an engineering point of view, for a machine to be conscious. First is perception: it has to know that it exists in a world as a separate entity. Second, it must have imagination so that it can look at what happened in the past and project forwards to what might happen in the future. Third, it must be able to focus its attention on important inputs, while it is being bombarded with data from the world. Fourth, it has to be able to plan. And fifth, it needs emotions, because to an engineer, emotions are actually a means of evaluation, of weighing up different plans. If a human is planning to walk down a steep slope towards a sheer drop, they will feel their muscles tighten and the hair on their neck rise. Their imagination tells them that there is danger, but it's the emotional reaction which helps them decide."
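Aleksander's engineering reading of emotion, as a weight applied when evaluating plans, can be caricatured in code. The loop below is my own schematic mapping of his five elements; the function names, the "fear" heuristic and all the numbers are invented for illustration, not taken from his work.

```python
# A schematic decision loop loosely mapped onto Aleksander's five
# elements: perception, attention, imagination, planning, and
# emotion-as-evaluation. Purely illustrative.

def choose_plan(observations, plans):
    # Perception: the world arrives as data about a separate self.
    # Attention: keep only the inputs that matter right now.
    salient = [o for o in observations if o["importance"] > 0.5]

    # Emotion: "fear" stands in for the tightening muscles on a
    # steep slope -- a scalar that discounts risky plans.
    fear = sum(o["importance"] for o in salient if o["kind"] == "danger")

    best, best_value = None, float("-inf")
    for plan in plans:
        # Imagination: project each plan's likely outcome forwards.
        # Planning weighed by emotion: value = gain minus feared risk.
        value = plan["expected_gain"] - 2.0 * fear * plan["risk"]
        if value > best_value:
            best, best_value = plan, value
    return best["name"]

observations = [
    {"kind": "danger", "importance": 0.9},   # sheer drop ahead
    {"kind": "scenery", "importance": 0.2},  # filtered out by attention
]
plans = [
    {"name": "walk down the slope", "expected_gain": 5.0, "risk": 3.0},
    {"name": "take the long path", "expected_gain": 3.0, "risk": 0.1},
]

print(choose_plan(observations, plans))  # take the long path
```

With no danger in view the fear term vanishes and the agent happily takes the steeper, faster route; the emotion is doing exactly the evaluative work Aleksander describes.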

Paradoxically, it's that emotional side which humans seem to be hardwired for. Babies are quick to let you know when they're unhappy; logic and planning follow much later. By contrast, robots and computers are good at telling you what you've done wrong, but emotion is a stranger.

So perhaps the key to making computers more useful is for them to show emotions. Would our computers be more useful if they were anxious that we might reboot them if they misbehave? Could they be made to feel satisfied if they helped us meet work deadlines, and thus become more helpful to us?

Those robots are definitely coming. Last week Sony launched two more "robo pups" able to recognise 75 voice commands and costing half of AIBO's $1,500 (about £1,000) price tag. And they're more realistic than AIBO. Sony has also built a small humanoid robot (which looks more like a metal astronaut) which can climb stairs – a neat trick when you consider that from birth it takes humans more than a year on average to learn the same technique, despite millions of years of bipedal existence built into our genes.

The market for such "companions" is also booming: robodogs don't make messes and if you go away unexpectedly for the weekend all they need is a power point. To today's teenagers who were raised on Tamagotchi, the handheld electronic "pets" which needed constant attention, the idea of a full-scale electric sheep might not seem so fanciful. And as children's dolls become more lifelike, is it such a huge step to "owning" robot children?

Professor Aleksander thinks the reality will be less dramatic: "I think we will see robots with primitive intelligence in ten to 15 years," he says. "By the end of the century we will have machines around that do have basic emotions, an embryonic form of consciousness. But you know what? It won't be great news."

Some might shiver at the idea. Yet our concerns about AI, and the spectre of computers that will be smarter than we are, probably say more about our own worries about identity in the modern world than about any real technological progress. We lurch from month to month between new scientific anxieties: one week it seems the world is obsessed with how soon humans are going to be cloned endlessly, so that we will bump into perfect copies of ourselves in the street; the next it is whether smart robots will leave us in the dust, or perhaps grind us into it. The truth, as ever, seems more prosaic: they will infiltrate our lives, and we will accept them, as we do the machines which give us money, turn us down for credit and run our cars. So which is now the more intelligent – the servant or the master?
