Tuesday 13 May 1997
Man v machine
Does the defeat of Kasparov by the Deep Blue computer mean that humans are no longer the only possessors of true intelligence?
Well, what is certainly true is that today's chess-playing computers do not play the game in remotely the same fashion as do their human adversaries. Deep Blue, it is said, can examine 200 million distinct states of the board in a single second, whereas a human chess-player can only examine, perhaps, two such states. But then most of the computer's labour would, from the perspective of an experienced human player, be so much wasted effort: a matter of pursuing the possible consequences of moves that the human player would rightly dismiss out of hand.
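The brute-force style of play described above can be sketched in a few lines of code. The game below is Nim (players alternately take one to three stones; whoever takes the last stone wins) rather than chess, and the counter is an illustrative addition, but the principle is the same: examine every reachable state of the game exhaustively, rather than dismiss unpromising lines out of hand.

```python
# A rough sketch of exhaustive game-tree search: every reachable
# position is examined, just as Deep Blue examines positions by the
# hundreds of millions. The game here is Nim, not chess.

def best_move(stones, counter):
    """Return (score, move) for the player to move: +1 for a certain
    win, -1 for a certain loss. `counter` tallies states examined."""
    counter[0] += 1
    if stones == 0:
        return -1, None            # opponent took the last stone: we lose
    best_score, best_take = -2, None
    for take in (1, 2, 3):
        if take <= stones:
            score, _ = best_move(stones - take, counter)
            if -score > best_score:  # our score negates the opponent's
                best_score, best_take = -score, take
    return best_score, best_take

counter = [0]
score, move = best_move(10, counter)
print(score, move, counter[0])  # a win, by taking 2; 600 states examined
```

Even for this trivial game, the search visits hundreds of states where a human would see at a glance that leaving a multiple of four stones loses.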
Pattern recognition plays a crucial role in human chess-playing, but is largely lacking in computer chess programs. Human players see positions on the board as relevantly similar to those they have encountered previously, but they would be hard put to say in what precise respect the current and the remembered positions resemble each other; this makes it difficult to program such knowledge into a computer.
But what Deep Blue lacks on the pattern recognition side, it more than makes up for in sheer speed. So it is with much of today's so-called artificial intelligence. It's not so much artificial intelligence, in our sense of the term, as incredibly rapid "artificial stupidity", where exhaustive and undiscriminating searches produce results we would achieve, if at all, only by highly selective searches guided by insight.
However, one shouldn't allow such considerations to make us too complacent about the claims of artificial intelligence. First, huge strides have already been made, and will doubtless continue to be made, in the field of pattern recognition, by so-called neural networks. A neural network (which normally exists only as a simulation on a conventional computer) can be thought of as a vast array of very simple processors, analogous to neurons in the brain, connected up in such a way as to enable the system to learn various prescribed tasks (where performing the task means producing certain outputs in response to certain inputs).
Information about the appropriateness of the system's outputs is repeatedly fed back into the system, and causes the strength of the connections between the processors to be adjusted so as to improve performance. This technology is likely, in due course, to make it possible to devise chess programs that play in a far more human fashion than Deep Blue, and which are capable, moreover, of learning from their mistakes.
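The feedback-driven adjustment just described can be illustrated with a single artificial "neuron", the simplest possible network. The task (learning the logical AND function) and the update rule shown here are standard textbook choices, not taken from the article: after each example, the connection strengths are nudged in proportion to the error, exactly the kind of repeated corrective feedback the text describes.

```python
# A minimal sketch of feedback-driven learning: a single unit whose
# connection strengths (weights) are adjusted after every example
# according to how wrong its output was. Real neural networks chain
# many such units, but the adjustment principle is the same.

def train(examples, epochs=20, rate=1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            output = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - output        # the feedback signal
            w0 += rate * error * x0        # strengthen or weaken each
            w1 += rate * error * x1        # connection accordingly
            bias += rate * error
    return w0, w1, bias

# Teach the unit the logical AND of its two inputs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train(AND)
for (x0, x1), target in AND:
    print((x0, x1), 1 if w0 * x0 + w1 * x1 + b > 0 else 0)
```

Nobody programs the unit with a rule for AND; the rule emerges from repeated correction, which is what makes this approach promising for tasks, like chess pattern recognition, that resist explicit specification.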
Beyond that, there are some powerful theoretical arguments, deriving from the work of Alan Turing in the 1930s, which suggest that, in principle, the cognitive powers of the human mind could be matched by any suitably programmed conventional computer with sufficient memory and speed of operation. Modern computers (apart from their limited memory) are implementations of what is known as a universal Turing machine.
A Turing machine is an imaginary device (incorporating a reading, erasing and printing head which operates on a moving paper tape) which was invented by Turing in order to give a precise meaning to the concept of performing some cognitive task mechanically - multiplying two multi-digit numbers together would be an example of such a mechanical task.
Different Turing machines, as originally conceived, are designed to perform different tasks. But Turing showed that you could build a universal Turing machine which, given (on its tape) a description of any particular Turing machine, could then replicate the behaviour of that machine. And this, in essence, is what a modern, general-purpose computer is designed to do: programming a modern computer is, in effect, a matter of instructing it to behave like a particular Turing machine.
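A Turing machine of the kind described above is simple enough to simulate in a few lines. The sketch below runs one particular (non-universal) machine, an illustrative one that flips every bit on its tape; the point is that the machine is nothing but a head reading, erasing and printing symbols under the control of a fixed table of rules. A universal machine is simply one whose table lets it read another machine's table off the tape and imitate it.

```python
# A minimal Turing machine simulator: a head moves along a tape,
# reading a symbol, printing a replacement, and changing state,
# all dictated by a fixed rule table.

def run(rules, tape, state="start"):
    tape = dict(enumerate(tape))    # sparse tape; blank cells read "_"
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write                    # print (or erase)
        head += 1 if move == "R" else -1      # step the head
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Rule table: (state, symbol read) -> (next state, symbol to print, move).
# This machine inverts a binary string, then halts at the first blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(FLIP, "1011"))  # prints "0100"
```

Swapping in a different rule table gives a different machine; feeding a *description* of a rule table to a suitably constructed machine is what makes a universal one, and what a programmable computer does every time it loads a program.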
Now we shouldn't ordinarily think of our own cognitive activity as purely mechanical. To be sure, we spend much of each day engaged in routine tasks which call for little or no creative thought (if, indeed, they call for any thought at all). But we also do other things, such as composing a letter to a friend, which do seem to us to involve creativity. And, indeed, it is true of most classes of mathematical problems that there is no general automatic prescription for solving them. To that extent, doing mathematics, like playing chess, is itself, in general, a creative activity. But the fact that a person writing a letter to a friend, or a mathematician trying to prove some theorem, isn't operating according to conscious rules, doesn't exclude there being, at some level, rules at work governing the relevant thought processes: rules, moreover, which could in principle be programmed into a computer.
Evidence, after all, suggests that all mental activity is a manifestation of the workings of the brain. And the brain, being a material object, is presumably subject to the self-same laws of physics that govern matter elsewhere. These laws themselves appear to be such that the behaviour of anything which obeyed them could in principle be simulated by a universal Turing machine - that is, by a suitably programmed computer.
Those who are impressed by this line of argument confidently expect that it will eventually be possible to program computers in such a way that they can pass themselves off as human beings in conversation. Turing himself proposed this, in 1950, as the acid test of whether a computer could think. He imagined a human being and a computer engaged in an "imitation game" with a human interrogator, whose task was to try to tell, on the basis of their answers to his questions, which was the human being and which was the computer. The computer would be programmed to answer the questions in as human a manner as possible, while the actual human being would try to persuade the interrogator that he or she was the real human being.
Turing argued that a computer which was capable of fooling such interrogators at least 50 per cent of the time should be regarded, not only as engaged in successful simulation of thought, but as genuinely thinking. (We could imagine a similar set-up involving chess, with a human interrogator simultaneously playing, via some remote link, a human player and a computer, and trying to guess which was which. Programming a computer to win a chess version of Turing's imitation game would clearly be a different matter from programming it merely to beat the human chess "interrogator" at chess: it would have to play like a human being, right down to making the sorts of mistakes a human would make.)
This Turing test has been enthusiastically embraced, by many contemporary workers in the field of artificial intelligence, as a test not merely of whether a computer is genuinely thinking - whatever that means - but of whether it is conscious. Indeed, some of Turing's remarks seem to imply that he himself regarded his test in this way.
The Turing test, thus interpreted, raises two questions which must be distinguished from each other. First, will it ever be possible to program a computer to pass the Turing test? People who answer "yes" to this question are said to believe in "weak AI" ("AI" meaning artificial intelligence). Second, if a computer could be constructed and/or programmed to pass the Turing test on a regular basis, at least as often as the average human being would, should it be credited with consciousness? People who believe in weak AI and answer "yes" to this second question are said to believe in "strong AI".
Let us suppose that weak AI is true, and that in the fullness of time experts in artificial intelligence succeed in programming computers (operating on essentially the same principles as current ones) reliably to pass the Turing test. Should we then conclude, in accordance with strong AI, that the computers are conscious, having "inner lives" comparable to our own? I think not.
Consciousness, as I see it, is a great mystery; nothing in our current understanding provides the smallest clue as to what it is, in physical terms, or why it should exist at all. But I take it that it is a biological phenomenon which evolved in response to various adaptive pressures: thus regarded, it is there only because it produces behaviour which conduces to the survival of our genes. Consciousness was nature's solution to certain problems of adaptation. But what nature had to work with, in solving this problem, is very different from what we have to work with.
Think of nature as under pressure to engender, in animals, dispositions to produce certain sorts of behaviour in response to various sorts of stimuli. From the fact that nature produced the desired relationship between sensory input and behavioural output by creating consciousness, it doesn't follow that we, with our technology, cannot produce this relationship without creating consciousness. Baldly put, perhaps nature wouldn't have needed to produce consciousness, if she had had etched silicon to work with, rather than organic carbon.
Finally, wouldn't it be better, on the whole, if strong AI were false, always assuming that we could be sure? "Intelligent" computers would be much more useful to us if we could confidently treat them as mechanical slaves, rather than as sensitive beings with rights that we were morally obliged to respect. But if we are one day faced with computers that can pass the Turing test, and we remain unsure whether they are conscious or not, one might plausibly argue that we should give them the benefit of the doubt!
Michael Lockwood is a lecturer in philosophy at Oxford University. He is the author of 'Mind, Brain and the Quantum' (Blackwell, 1989).
William Hartston analyses the final two games between Kasparov and Deep Blue in The Tabloid, page 14.