It's a no-brainer

Artificial Intelligence is 50 years old. So why have we yet to create anything worthy of the name? Danny Bradbury investigates.

For the most promising, radical and exciting branch of computer science, 2006 should have gone down as a landmark year. Robot butlers should have popped the corks of fine champagne, having first made their own informed decision as to which vintage was appropriate. It is 50 years since the Dartmouth Summer Research Project on Artificial Intelligence (AI), where researchers laid down the foundations for their hopes, dreams and plans to develop machines that would simulate every aspect of human intelligence. So why don't we have those robots?

At the recent anniversary conference, held once again at Dartmouth College in New Hampshire and this time titled AI@50, the verdict was that, clearly, developers still have far to go.

"I had hoped that the original Dartmouth meeting would make a substantial dent in the problem of human-level AI, and it didn't," says John McCarthy, the organiser of the 1956 meeting, and a speaker at this year's event. "The main reason is that AI was a lot harder to develop than was anticipated."

By human-level AI, McCarthy means machines that really have a mind. Instead, most AI developments to date have focused on reproducing very narrow aspects of human intelligence; voice-recognition software uses AI to turn speech into text, but it cannot discuss what you're telling it. Some digital cameras use AI to steady an image in the viewfinder, but they can't tell you what the image was.

Chess has always been the holy grail for AI researchers, says Brock Brower, an AI author who was on the steering committee of the AI@50 conference. Chess, it is thought, has the right combination of human flair alongside serious number-crunching. IBM cracked that problem in May 1997 when Deep Blue beat the chess champion Garry Kasparov in a controversial match. "But the only thing it can do is play chess. It's an idiot savant," says Brower. "It was a stupendous computational victory over the clever chess player, but it doesn't have a mental state."

So can we recreate true understanding in a computer? Some hope to do it by simulating aspects of the brain, which is made up of a hugely complex series of nerve cells called neurons. Scientists have created simplistic neural networks that can "learn" responses to basic conditions, for applications such as visual recognition.
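The kind of "learning" these simplistic networks do can be sketched in a few lines. The example below is not any particular researcher's system; it is a minimal, classic perceptron (a single artificial neuron acting as a binary switch, exactly the simplification Bezzi's work moves beyond) that learns the logical OR function by nudging its weights after each mistake.

```python
# A single artificial "neuron" (perceptron) learning the OR function.
# Real neurons are far richer; this only illustrates learning by
# repeated weight adjustment.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights and bias with the classic perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1         # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The four input/output examples for logical OR
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)
```

After training, the neuron has "learned" a response: it outputs 1 whenever either input is on. Visual recognition networks chain thousands of such units together.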

Today, computing power can produce more effective neural nets, argues Michele Bezzi, a researcher at Accenture Technology Labs. His research concentrates on making each neuron work more like a complex human cell and less like a binary switch.

Steven Furber, a professor at Manchester University's School of Computer Science, is going for volume with his £1m "brainbox" project. The idea is that large numbers of simple microprocessors can be linked like the networks of neurons. He is building a prototype network of one million neurons, which he says could scale to a much larger size if enough chips were available. By comparison, a bumblebee's brain contains roughly 850,000 neurons.

But this is just a small step. There are around 100 billion neurons in the human brain, warns Furber. "A large part of the problem is knowing what networks map on to it, and that is where there are still huge gaps in our knowledge," he says.

We simply don't know enough about how the brain is wired together to emulate it, even if we had the computing power. And it isn't just the mechanics of the brain that we fail to understand, says McCarthy. The process of thinking is also hard to analyse. "Humans are not very good right now at understanding intelligence," he says.

It is becoming clearer to some researchers that there are different aspects to intelligence, and that they do not all involve reasoning. "If you get as old as I am, at 75, you begin to think that your emotions are part of your intelligence," Brower muses. How do you build that into software?

Yesterday's software algorithms simply won't cut it, argues Sebastian Thrun, director of Stanford University's AI lab. He should know. Last October, the lab won $2m (£1m) from the US government's Defence Advanced Research Projects Agency (Darpa), for building a vehicle that drove 130 miles along desert roads - on its own, without a driver.

Stanford's car used probabilistic analysis to find its way around obstacles and keep to the road. With a pocket calculator, adding four and four together to produce eight is easy, because both the inputs and the outputs are certain, explains Thrun. With AI, the inputs are uncertain, making for outputs that are neither right nor wrong - they are simply more or less probable. "Take a robot that needs to judge where the road is. It can't tell you exactly where the road is. It has to guess," he says.

Stanford is entering the next Darpa grand challenge: to design a robot car that can drive through city streets, monitoring other traffic and navigating its way around road-blocks.

Like other AI systems, Stanford's car does not have human-level intelligence. But Saul Haydon Rowe, the senior vice president of knowledge management at Corpora, doesn't think this matters. Corpora develops software that reads documents and analyses their sentiment. It can help companies understand what the newspapers are saying about them, for example.

"We can solve a lot of the problems today with much simpler technology than people think," he says. "Let's do 80 per cent of what a fully conscious AI system will do, and we'll have a road map."

The 80 per cent figure may be a little high - we have further to travel down that particular road - but how will we know when we have arrived at our destination?

"It will be able to answer the questions that we are interested in asking it, about the consequences of various policies, either personal or national," says McCarthy. He is describing the test, proposed by Alan Turing in 1950, in which two contestants - a human and a computer - are hidden from a judge. Both attempt to convince the judge that they are human by conversing with him. If the judge cannot tell which is the computer, then the computer has passed the test.

The £50,000 Turing Prize will be awarded to the creator of the first machine to pass the test. For now, the program judged to be the most like a human is Joan, by the British programmer Rollo Carpenter. It won the Loebner Prize, £1,000, at University College London on Sunday.

But in his book Consciousness Explained, Daniel C Dennett argues that, even if the Turing test is a proof of intelligence, it cannot be a proof of consciousness in the machine. "In the end, that will be a conviction that men will achieve, not the machine," says Brower.

We will judge consciousness in machines as we do in each other. If it looks conscious, and acts conscious, then we will declare it to be so. Until then, we will inch along the road to pure artificial intelligence, guessing at our destination.

Five AI 'triumphs' that exist only in our minds

Fiction is full of artificially intelligent entities, from Forbidden Planet's Robby the Robot through to The Matrix's Agent Smith. Here are a few other examples that show how far fact has fallen short of fiction.

1. HAL 9000 (1968)

In the film 2001: A Space Odyssey, HAL kills most of the crew after learning that they intend to disconnect it. The fact that HAL's letters are each one letter away from spelling IBM has been repeatedly dismissed as a coincidence. The name stands for "Heuristically programmed ALgorithmic computer".

2. K-9 (1977)

K-9 is a talking robotic dog in Doctor Who, with a laser in his snout and a natty tartan collar. Any comparisons with Sony's robotic dog, the Aibo, are exaggerated.

3. Marvin the Paranoid Android (1978)

Created by Douglas Adams in The Hitchhiker's Guide to the Galaxy, Marvin is perpetually depressed, because he can never exploit his vast intellect.

4. Roy Batty (and friends) (1982)

Played by Rutger Hauer in the movie Blade Runner, Roy Batty is the violent leader of a group of replicants -- artificially created humans pitted in a battle of wits against the human replicant hunter Rick Deckard.

5. Skynet (1984, 1991, 2003)

In the Terminator series, Skynet is the computer system running the nuclear defence network. It becomes self-aware and tries to wipe out humans.