One of the most intractable problems in the race to design computers with 'artificial intelligence' is knowing exactly what intelligence is, and how to recognise it. One of the first attempts to address this was embodied in the famous Turing test, devised in 1950 by Alan Turing, a brilliant Cambridge mathematician. A machine could be said to be intelligent, he said, if you could ask it questions on any subject via a computer terminal and not know from its answers whether it was a machine or a human being. To this day, nobody has been able to build a machine that can truly be said to have passed the Turing test. The mystery is whether they ever will.
'Will machines ever be more intelligent than humans?' asked Kevin Warwick, professor of cybernetics at Reading University, at last summer's meeting of the British Association for the Advancement of Science. 'The answer is clearly, yes.' Professor Warwick is the latest in a long line of prophets predicting man-made superintelligence. A more difficult question is when it will happen, although it will certainly be in the lifetime of our children, he said.
'Technically we know that certain aspects of human intelligence - memory, decision making, logic - can be artificially replicated. Indeed, the artificial form is much faster, more powerful, more flexible and more reliable . . . If machines can be made to be as intelligent as humans, then for sure they will be more intelligent because of their superior performance.'
Few researchers in the field of robotics and computers would agree. Lionel Tarassenko, a lecturer in information engineering at Oxford University, says that trying to emulate aspects of human behaviour - seeing, feeling, moving, speaking - cannot be equated with human intelligence. 'I'm concerned with building machines that can duplicate human behaviour, but whether that is a display of human intelligence or not is a philosophical question. I don't actually believe that if you can duplicate human behaviour you can encapsulate human intelligence or consciousness.'
Dr Tarassenko and others would like to abandon the notion of the Turing test as a definition of machine intelligence. His head of department, Professor Michael Brady, agrees. Inherent in the Turing test is an ability to reason or to think, he explains. 'I would suggest that this is not what intelligence evolved for. I think sensing and action - not thinking - are the very well-spring of intelligence. They are fundamental to finding food, a mate, avoiding threat - the very essence of being an animal and negotiating successfully through a cluttered world of objects. The Turing test puts reasoning - cognition - at the centre of what intelligence is all about, and relegates the ability to perceive or move purposefully.'
The classic example of a well-honed machine that gives a fair impression of cognition and intelligence is the chess computer. The best known is 'Deep Thought', created by researchers at Carnegie Mellon University in the US, which has even seen off the odd grand master. Chess computers, even the simple versions sold on a microchip, play at a surprisingly good level, Professor Brady says. 'But nobody would ever dream of saying they play chess in the way a human being does,' he adds.
Chess computers work by rapid computation, something the human mind does not do nearly so well. Instead, human chess players use their power of judgement, based on experience of knowing and recognising patterns of chess pieces on a board.
Clearly, then, a machine that can perform a good imitation of a seemingly intelligent human activity - such as playing chess - is nothing more than a good calculator. Chess computers need to follow relatively few rules, but perform a lot of memory searching, to take on good human players.
There is a trade-off between the number of rules and the degree of searching required, says David Murray, another member of the Oxford robotics team. 'If you have a machine that can do a lot of searching, you can get away with relatively few rules. If you have a system that cannot do much searching, you've got to have more rules for it to follow.'
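The trade-off Dr Murray describes can be sketched with a toy game. In the Python illustration below (an invention for this article, not anything built at Oxford), a one-pile version of Nim - remove one to three stones, taking the last stone wins - is played perfectly in two ways: by exhaustive search with no built-in knowledge, or by a single known rule with no search at all:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """Pure search, no game knowledge: a position is winning if some
    move leads to a position that is losing for the opponent."""
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def wins_by_rule(pile):
    """Pure knowledge, no search: the known result that multiples of
    four are the losing positions."""
    return pile % 4 != 0
```

Both give identical answers for every position: the search-heavy version needs no expertise but must explore the whole game tree, while the rule needs almost no computation at all.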
This is a fundamental limitation of a branch of artificial intelligence known as 'expert systems' - computers that can store and manipulate knowledge rather like an automatic encyclopaedia. In professions where many facts are necessary - such as medicine or civil engineering - expert systems can help humans to form judgements. But expert systems are unlikely ever to get beyond being sophisticated aides-memoire. 'There are very difficult problems now in large expert systems,' Dr Murray says. But increasing the number of rules - in this case, the instructions for the computer to follow - causes a disproportionate increase in the time it takes to search the databank. 'You have got to think carefully about the knowledge/search trade-off,' emphasises Dr Murray.
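The 'automatic encyclopaedia' character of an expert system can be shown in a few lines of Python. The sketch below uses the classic forward-chaining technique; the rules and facts are invented for illustration, and real systems hold thousands of such rules - which is exactly where the search cost Dr Murray warns about begins to bite:

```python
# Invented if-then rules: each pairs a set of conditions with a conclusion.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "muscle_aches"}, "influenza_suspected"),
]

def infer(facts, rules):
    """Forward chaining: repeatedly fire any rule whose conditions are
    all known, adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Each pass re-scans every rule against every known fact, so the time to exhaust the databank grows much faster than the rule count - the knowledge/search trade-off in miniature.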
The main emphasis of the Oxford group is on building robots that can see and move around in a cluttered environment - an aim that fits in well with Professor Brady's view that intelligence arises through interaction with the real world. There is an analogy here with the intelligence skills needed by simple living organisms that perform similar functions.
Dr Murray is using something he calls 'active vision'. He does it by building 'heads' which are able to direct their television camera 'eyes' towards an object in much the same way as we move our eyes to analyse objects in our field of view. 'We are eliminating irrelevant detail,' he says. Vision is becoming selective in order to reduce the enormous amount of computation needed to analyse everything in the wider environment.
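The saving that active vision buys can be made concrete with a simple sketch (the function below is an invented illustration, not the Oxford system): instead of analysing every pixel in a frame, only a small window around the current fixation point is extracted for further processing:

```python
def foveate(frame, cx, cy, radius):
    """Return only the square window of pixels centred on the fixation
    point (cx, cy), clipped to the frame boundary - everything outside
    it is simply never examined."""
    return [row[max(0, cx - radius):cx + radius + 1]
            for row in frame[max(0, cy - radius):cy + radius + 1]]
```

For a 100-by-100 frame and a window of radius 5, only 121 of the 10,000 pixels need any further analysis - irrelevant detail eliminated before computation starts.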
Studying how living organisms see has helped robotics researchers devise their own vision systems. Predators such as owls and cats, for example, have good stereoscopic vision because their eyes point forward and provide the brain with two slightly different fields of view. Artificial stereo vision works on the same principle, Professor Brady explains. 'When you take those two images the key question is what do you match between the two images? That's quite a hard problem. The second is how do you match it. What is the process by which you match the things in the left image to the things in the right image?'
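The matching process Professor Brady describes can be sketched for a single pair of image rows. In the illustration below (an invention, far simpler than any real system), a patch from the left scanline is slid along the right scanline to find its best match; the horizontal shift between the two - the disparity - then gives depth through the standard pinhole-camera relation:

```python
def disparity(left, right, x, patch=2):
    """Slide the patch of the left scanline centred at x along the
    right scanline, score each position by sum of absolute differences,
    and return the shift to the best match."""
    ref = left[x - patch:x + patch + 1]
    best_x, best_cost = None, None
    for cx in range(patch, len(right) - patch):
        cand = right[cx - patch:cx + patch + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if best_cost is None or cost < best_cost:
            best_x, best_cost = cx, cost
    return x - best_x

def depth(disp, focal_length, baseline):
    """Standard stereo relation: depth = focal length x baseline / disparity."""
    return focal_length * baseline / disp
```

Deciding *what* to match (the patch) and *how* to match it (the difference score) are exactly the two hard questions Professor Brady raises; this sketch dodges both by using a distinctive, unambiguous feature.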
The problem is mathematical, and relies on being able to match the tremendous power of the brain to process images - and to process them continuously. Can the structure of the brain help the robotics researcher create intelligence? One difficulty is the enormous complexity of the organ, with its billions of connections between nerve cells. 'There is no doubt that the internal structure of computers is very different from the internal structure of the brain,' says Professor Brady. 'There is an argument that, because the two have such radically different structures, human-like behaviours like seeing, hearing or speaking are intrinsically more difficult with computers.'
Nevertheless, attempts are being made to emulate the workings of the brain in a branch of research known as neural networks, where electrical connections are wired together in a very crude version of the complex nerve networks in the brain. 'The idea is that you connect a large number of quite simple devices, rather than linking a small number of the powerful devices seen in modern computers,' Professor Brady says.
The principle of neural networks is quite old. In fact, the first scientific paper on them was published 50 years ago. But they were abandoned in the 1960s because computers were deemed incapable of learning how to implement certain mathematical functions (this has since been disproved). As a result, classical AI (artificial intelligence) research emerged, based on such principles as 'planning', 'reasoning' and 'knowledge'. Neural nets re-emerged in the 1980s as a vogue subject, helped by the enormous increase in computer power which has occurred over the past 30 years.
Lionel Tarassenko builds circuits based on neural networks to help the Oxford robots feel their way through their cluttered world. He stresses the relative crudity of these circuits compared to the living connections inside our heads. Real, living neural networks, for instance, are 'dynamic' in that they are working and processing all the time - unlike the artificial versions created in the laboratory.
'The robot knows four things,' Dr Tarassenko explains. 'It is told how far the nearest obstacle is straight ahead, and whether it is in the directional range plus 60 degrees or minus 60 degrees. Then it is told when a collision has occurred. The neural network learns to associate these sensory inputs with the correct drive for the motors so that it learns how not to bump into the walls and the obstacles.'
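The kind of learning Dr Tarassenko describes can be caricatured with a single artificial neuron (the sensor geometry, numbers and training data below are invented for illustration): the neuron steers on the basis of weighted range readings, and its weights are adjusted only when it turns the wrong way - the machine equivalent of a collision:

```python
def train(samples, epochs=20):
    """Classic perceptron learning rule: steer according to the current
    weights, and correct the weights only when the turn was wrong."""
    w_left, w_right, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for dist_left, dist_right, correct_turn in samples:
            output = 1 if w_left * dist_left + w_right * dist_right + bias > 0 else -1
            if output != correct_turn:      # a 'collision': learn from it
                w_left += correct_turn * dist_left
                w_right += correct_turn * dist_right
                bias += correct_turn
    return w_left, w_right, bias

# Invented readings: turn = +1 means steer left (obstacle nearer on the
# right), -1 means steer right.
SAMPLES = [(5.0, 1.0, 1), (1.0, 5.0, -1), (4.0, 2.0, 1), (2.0, 4.0, -1)]
```

On data this simple the rule converges after a handful of corrections - pure trial and error, with no plan and no concept of what an obstacle is.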
The robot, which has been built in collaboration with Sheffield University, learns by its mistakes to avoid collisions. Like the chess computer, it produces a decent impression of primitive intelligence. Yet it fails to fulfil three important principles formulated by artificial intelligence researchers: planning, knowledge and reasoning. Dr Tarassenko says: 'It doesn't plan, it reacts to collisions. It has no concept of what an obstacle is at all, and it doesn't reason about it because it proceeds by trial and error.'
So even something that can learn by its mistakes is still a long way from displaying the sort of limited intelligence possessed by a young child. 'Mathematics can help us define what we mean by learning. If it is coming up with a new behaviour - an interpolation or combination of things that have been seen before - then a neural network can produce that form of intelligence,' Dr Tarassenko says. 'On the other hand, if it is a completely new experience, quite beyond the realm of experience during training, then the answer is no better than random.' What may separate human intelligence from machine intelligence, he suggests, is the ability to extrapolate beyond the limits of experience.
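The distinction Dr Tarassenko draws - interpolation within experience against extrapolation beyond it - shows up in even the simplest learned model. The sketch below (an invented illustration) fits a piecewise-linear model to samples of y = x squared over a limited range, then queries it inside and far outside that range:

```python
def fit_interpolator(xs, ys):
    """Piecewise-linear model: sensible between training points, but
    with no basis at all for points outside their range."""
    pairs = sorted(zip(xs, ys))
    def predict(x):
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        # beyond the training range: repeat the nearest endpoint
        return pairs[0][1] if x < pairs[0][0] else pairs[-1][1]
    return predict

# Train on y = x**2 over [0, 2] only.
xs = [i / 10 for i in range(21)]
model = fit_interpolator(xs, [x * x for x in xs])

interp_err = abs(model(1.05) - 1.05 ** 2)   # inside the training range
extrap_err = abs(model(5.0) - 5.0 ** 2)     # far beyond it
```

Between training points the error is tiny; far outside them the model can only repeat the nearest thing it has seen, and its answer is no better than a guess.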
Dr Murray puts it another way: 'I believe we will be able to make very happy robots that can go round sweeping the streets, or making beds, or doing the toast. I can believe all that because it is on the route from sensing, through perception, to action. What I do not know is whether we can build robots that have a sense of being - because, frankly, I don't know what this sense of being is inside me.'
Hunting of the Snark
1943: Two American scientists, Warren McCulloch and Walter Pitts, publish the first paper on neural networks - the idea of building electronic circuits on a similar plan to nerve connections in the brain, to compute simple mathematical functions.
1950: Alan Turing, a Cambridge mathematician, devises his Turing test of intelligence: 'I propose to consider the question 'Can a machine think?'. . . We may hope that machines will eventually compete with men in all purely intellectual fields.'
He later said: 'The original question, 'Can machines think?', I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have to be altered so much that one will be able to speak of machines thinking without expecting to be contradicted.'
1951: Marvin Minsky, a Harvard mathematician, builds the first 'neurocomputer', which he calls the Snark. It functions well technically, but fails to carry out any interesting information-processing.
1958: Minsky moves to Massachusetts Institute of Technology and establishes a department with colleague John McCarthy that becomes the pre-eminent centre for research into artificial intelligence.
1960s: Research on neural networks becomes moribund as artificial intelligence comes into vogue.
Joseph Weizenbaum, a computer scientist at MIT, devises a program called Eliza to reproduce the conversational skills of a psychotherapist. However, the program merely uses clever tricks to construct sentences; it is not a serious attempt at understanding meaning.
1967: 'Man has within a single generation found himself sharing the world with a strange new species: the computer . . . Neither history, nor philosophy, nor common sense will tell us how these machines will affect us, for they do not 'work' as did the machines of the industrial revolution.' Marvin Minsky, by now a guru of AI.
1969: Minsky and Seymour Papert discredit neural networks in their book Perceptrons. Research into neural computing ends temporarily.
1970s: Hans Moravec, a robotics researcher at Stanford University, attaches a mobile robot to a computer to negotiate a 30-metre space, avoiding objects. It fails.
1972: Sir James Lighthill, a mathematician, nearly kills off AI research in Britain in his review for the Government: 'In no part of the field have the discoveries made so far produced the major impact that was then promised (in the 1950s).'
1982: John Hopfield, a physicist at the California Institute of Technology, publishes research on associative memory using neural networks, which leads to a resurgence of interest in the subject.
1986: Renaissance of neural networks complete with the publication of books describing computer instructions that circumvent Minsky and Papert's objections.
1987: First international conference on neural computing in modern times forms the International Neural Network Society.