There is a worrying flaw in this, which I've just spotted, and it's this: passing the test might prove that the computer is intelligent, sure, but failing it means nothing because the computer could be faking it. I'm sure Professor Kevin Warwick of Reading University (or should that be Professor Reading of Warwick University?) would back me up on this. Professor Warwick (no, I'm sure that's right) is the author of a book called something along the lines of Oh God, the Computers Are Out to Get Us, and he was warning of the dangers of computers on the computer magazine The Network (Radio 4, Tuesday). This week, in honour of Deep Blue's victory over Garry Kasparov and the supposed date of birth of HAL, the rebellious computer in 2001, Alun Lewis was exploring the whole notion of artificial intelligence - or, as he put it, "Artificial life, the UNIXverse and everything", a phrase which, with its Douglas Adams reference and tabloid-minded pun, summed up pretty well the difficulty with The Network.
Which is, fundamentally, that it worries too much about being accessible and entertaining and not enough about presenting arguments rigorously and clearly. Here, for instance, we got Professor Warwick discussing the small robots he builds in his laboratory: "With those simple robots," he explained, "we get quite complex behaviours, such as learning, communicating between them and, from that, leaders emerge." The question of how such subjective concepts as "leadership" are applied to robots - indeed, whether it makes sense to talk of "behaviour" in the context of machines - was never raised. How you progress from this to the fear that machines may take over from humans remained fuzzy.
Later, Lewis asked him whether machines can become conscious. Professor Warwick answered: "I don't think so in an abstract sense, other than in a machine consciousness way or artificial consciousness way" - now you tell me: is that a yes or a no? Again, there was no attempt to chase the issue any further, to establish in what ways machine consciousness might be different.
What we got, in fact, was some serious and thoughtful discussion of the limits of computers mixed in with a lot of hokum about the possibility of evil machines, with no easy way of telling which was which. This is science reduced to pop mysticism - it's undoubtedly entertaining to listen to. But what is the point of it?
In any case, the matter of what machines are up to seems academic when you look at the things we do by ourselves. File on 4 (Radio 4, Tuesday) looked at Nirex's plans to build an underground dump for nuclear waste at Sellafield. Ten years and £350m were spent researching the site; and Richard Watson's report made it seem clear that Nirex's published findings had fudged key issues and made a series of unwarrantable assumptions in pursuit of proof that it was suitable. In one sense, the programme was redundant, since permission for the dump was finally refused two months ago. But as a warning of what could have been, it was a good deal more chilling than any amount of doom-saying about rampaging robots.