
Charles Arthur On Technology

A British programmer claims to have invented a true artificial intelligence. The Net isn't so sure...

Wednesday 14 April 2004 00:00 BST

When Google announced its prospective e-mail service a couple of weeks ago (unwisely using the name "Gmail" without having trademarked it), a number of commentators expressed fears about the privacy aspects of having a big company scanning your e-mail for key words against which it could sell ads.

To be honest, I don't understand the concern. For one thing, it's obvious when you sign up that your e-mails are going to be scanned. For another, it's being scanned by a machine, not a human. (Be honest - you didn't think Google had hired every spare programmer to hack out the answers to your personal web query, did you?)

To cap it all, the reality is that wherever you go on the web today, and wherever you've been in the past, if you've made any sort of comment - or even just been commented on in a way that identifies you - you'll have been sucked into Google's huge archive of what goes on around the internet. What's more, other people will be able to find out what you've done.

This was illustrated with some force a few weeks ago when a story appeared in New Scientist magazine. It reported that a British programmer, Jim Wightman, had come up with a revolutionary artificial intelligence program which, he said, would create "bots" able to strike up conversations in chatrooms and spot would-be paedophiles; suspects would be reported back to him, and he could then pass the details on to the police.

The story was widely picked up by other news outlets. (The Independent did not run it.) But within hours of its appearance, folk around the net began stroking their chins and saying "Just a minute..."

Looking at the transcript of a "conversation" in New Scientist, allegedly between one of the "bots" and a chatroom user, many people began to question how a previously unknown programmer could have created what looked to be the biggest leap in artificial intelligence in many decades.

They then began hitting Google for more about the unknown Wightman. He didn't stay unknown for long. The site he had set up to publicise his "bots" provided raw material (an e-mail address, plus his work address via the site's registration details) that could be used to scour Usenet, the network of thousands of newsgroups, for details of his past postings. (Google has the complete archive of Usenet postings, going back to the internet's year zero. If you don't want your postings to appear there, you have to put "X-No-Archive" in the headers of your news postings.)
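For readers curious what that header looks like in practice, here is a minimal sketch - my own illustration, not anything taken from Wightman's site - of how a posting script might add it. It uses Python's standard nntplib module (included with older versions of Python); the news server, newsgroup and addresses are placeholders.

    # Illustrative sketch: add the "X-No-Archive: Yes" header to a Usenet post
    # so that archives such as Google's are asked not to keep a copy.
    import io
    import nntplib                      # standard-library NNTP client in older Pythons
    from email.message import EmailMessage

    article = EmailMessage()
    article["From"] = "someone@example.com"        # placeholder address
    article["Newsgroups"] = "uk.test"              # placeholder newsgroup
    article["Subject"] = "A post I'd rather was not archived"
    article["X-No-Archive"] = "Yes"                # the convention archives look for
    article.set_content("Body of the post goes here.")

    # Connect to a (hypothetical) news server and post the article.
    with nntplib.NNTP("news.example.com") as server:
        server.post(io.BytesIO(article.as_bytes()))

Setting the header only asks archives not to keep the post; it is a convention that Google's Usenet archive honours, not an enforceable rule.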

They trawled Google for any hints of things he might have done in the past. This turned up the occasional angry exchange in various newsgroups; a few irritated threads on specialist discussion boards; and a host of schemes that Wightman had been involved with at one time or another.

It also turned up examples of claims he'd made that hadn't been supported by later evidence. Like it or not, Wightman's footprints - or perhaps that should be fingerprints - were all over the web and Usenet. Unfortunately for him, and for New Scientist's story - which has increasingly come to look like a piece of over-eager reporting of a single-sourced claim - there was absolutely no evidence that he had the sort of skill in artificial intelligence that would let him build a world-beating program capable of asking "Did you watch Robocop last night?", and, on getting the reply "What side was it on?", answering "Sky One".

Think a little about what that discourse involves. "Side" has a special meaning only in the context of television; to understand that it's equivalent to "channel" is something we learn only with experience. For a computer, that's a difficult link to make.

As more analysis came through, blog began to link to blog about it, until there was a whole web of information - and doubt - about the existence of Wightman's proclaimed "bots".

I did contact him myself on a number of occasions to try to get to the bottom of the claims and counterclaims. He insists that the bots exist, and that the AI program has been entered on his behalf by a company - which he says he cannot name - for the Loebner Prize, in which AI programs try to convince a human panel that they are human. The contest is based on the test proposed by Alan Turing, the father of modern computing, as a way of assessing whether a machine can really think. Wightman insists he will win. (Entries close 1 August, and the contest begins two weeks later. My diary is marked.)

My point isn't to declare whether or not I think Wightman's claims are true. Interestingly, New Scientist seems to be having second thoughts: its website now says "Serious doubts have been brought to our attention about this story. Consequently, we have removed it while we investigate its veracity." I'd urge you to make your own decision: search Google, and follow the Waxy.org link, which offers the widest collection of URLs to information that could help you decide.

My point is that this episode demonstrates what's really going on with the net. While everyone is worrying about Google acting like Big Brother, they're ignoring the fact that it has democratised Big Brother and made it available to anyone. Imagine if the telescreens in 1984 had let every citizen watch what anyone else was doing: the mendacious society depicted by Orwell couldn't have continued.

What Google and the other search engines do is like Jeremy Bentham's concept of the panopticon, the prison in which every prisoner can be seen from a single place. But our existence now differs from both of those models because we can use Google to watch each other. We are all Big Brother. The only secrets that remain are those that aren't yet on the web - and that's a pool of knowledge that is shrinking daily.

network@independent.co.uk
