How close are we to creating HAL on Earth?

Charles Arthur, Science Editor
Sunday 22 December 1996 00:02 GMT

Happy birthday, HAL. Scientists around the world are preparing to celebrate the "birth" of the world's most famous fictional computer who, in 2001: A Space Odyssey, was first switched on in January 1997.

According to Arthur C Clarke's novel, the actual date is 12 January. Among those celebrating the most terrifying vision of artificial intelligence so far created is the University of Illinois. Its staff are marking the date with a Cyberfest, a vast Internet party.

Clarke's tale recounts that four years after he became operational, HAL finds a problem with the communications link between the space ship Discovery and Earth. The astronauts, Dave and Frank, can find nothing wrong with the link. They decide the fault lies with HAL and plan to close him down.

HAL has other ideas: Frank is killed during a spacewalk; Dave is trapped outside the ship. Dave demands to be let back in. "I'm sorry, Dave, I'm afraid I can't do that," is HAL's chilling response. The computer's motives are not selfish - he knows that without him the Discovery cannot complete its mission of reaching Jupiter.

Dave manages to get inside the emergency airlock and disconnects HAL's higher functions. The computer's fears are well-founded: Dave is left alone and the ship never returns to Earth.

HAL's birthday has inspired scientists to examine how close we are to being able to build a computer like him. In a new book, HAL's Legacy: 2001's Computer as Dream and Reality, a team of computer specialists analyse the few clues that we have from the 1968 film.

Clarke and Stanley Kubrick, who directed the film, devised a computer which can control a large number of independent electronic and mechanical systems; hear and understand speech; see and recognise still and moving objects, culminating in the ability to lip-read; respond to situations with speech or actions; make plans; play games (such as chess); express emotions such as fear and worry.

How close are we to having computers which can do those things? In some categories, extremely close. We have chess programs which can beat all but the very best players. There are speech recognition programs which can understand and decode thousands of words, even when spoken under stress, and vision analysis systems which can identify the edges of static and moving objects. We even have programs which can simulate emotions - primarily paranoia. None, though, approaches human ability.
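
To give a flavour of the simplest of those feats - finding edges - the Python sketch below marks any point in a made-up grid of brightness values where neighbouring pixels differ sharply. Real vision systems use far more sophisticated filters, but the underlying idea is the same.

    # An "edge" is a place where brightness jumps between neighbouring pixels.
    # The image and threshold below are invented for this illustration.
    image = [
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
    ]
    THRESHOLD = 4  # how large a jump in brightness counts as an edge

    def edge_map(img):
        # Compare each pixel with its right-hand neighbour, row by row.
        return [[1 if abs(row[x + 1] - row[x]) > THRESHOLD else 0
                 for x in range(len(row) - 1)]
                for row in img]

    for row in edge_map(image):
        print(row)  # the column of 1s marks the vertical edge down the middle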

Despite the progress of the past 30 years, we still aren't anywhere near building a computer which provides a gestalt of all these functions. We have computers which can produce five-day weather forecasts and simulations of airflow over aircraft wings by unbelievable number-crunching; we have computers which can fool people into thinking that they're dealing with a human psychologist; we have computers that can point out conflicts in complex projects.
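
The "psychologist" trick, in the tradition of Joseph Weizenbaum's 1960s program Eliza, is the easiest to demystify: the machine matches a few keywords and reflects the speaker's own words back as questions. The Python sketch below is an invented, minimal illustration of that idea, not the code of any real program.

    # A keyword-and-reflection "psychologist", loosely in the style of Eliza.
    # Every rule and phrase here is invented for this illustration.
    import random
    import re

    # Swap first- and second-person words so replies mirror the speaker.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my", "are": "am"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"i am (.*)", re.I),
         ["Why do you say you are {0}?"]),
        (re.compile(r"(.*)", re.I),
         ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word)
                        for word in fragment.lower().split())

    def respond(sentence):
        # Answer with the first rule whose pattern matches the sentence.
        for pattern, replies in RULES:
            match = pattern.match(sentence.strip())
            if match:
                return random.choice(replies).format(reflect(match.group(1)))

    print(respond("I feel trapped outside the ship"))
    # e.g. "Why do you feel trapped outside the ship?"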

But there is something about being human - about carrying out human-like processes such as talking, interpreting, planning and foreseeing problems, and containing and presenting all of that within a single machine - which still defeats us.

Will that always be true? Roger Schank, former professor of computer science and psychology at Yale University, and director of the Yale Artificial Intelligence Project, notes in HAL's Legacy: "That is the bad news - HAL could never exist." His conclusion is based on the famous response to Dave's request that the doors be opened. "I'm sorry, Dave, I'm afraid I can't do that."

Schank comments: "This conversation sounds all too human. It's about goal conflict." Dave wants to get in and disconnect HAL (or at least, that is what HAL has concluded); HAL wants to stay connected. The two goals cannot coexist peacefully.

But how, adds Schank, do we create a machine that, when asked to do something, says "I'm afraid I can't do that" when what it means is "I won't"? Such a response suggests that it has already experienced "goal conflict" and learnt to be polite when it happens. What computer has been through that?

Instead, says Schank, computer research is moving towards machines that are "local experts": "They will know a great deal about what they are supposed to know about and miserably little about anything else."

The trouble with computers is that it's very hard to teach them about a world that is full of exceptions and hidden inclusions.

Yet optimists among artificial intelligence workers feel that the advances being made in neural networks (which mimic our biological synapses) mean that intelligent, conscious computers will exist in the next century. Already, neural networks have been built which show the learning ability of a worm. It may not be HAL - but it's better than the average light switch.
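
What "mimicking synapses" means in practice can be shown with a single artificial neuron: each synapse is just a number, nudged up or down whenever the neuron's answer is wrong. In the Python sketch below, the task (learning the logical AND of two inputs) and all the figures are invented for illustration; a worm-level network simply has many more such weights.

    # A single artificial neuron learning logical AND; the weights play the
    # part of synapses. All numbers here are invented for this illustration.
    import random

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)
    learning_rate = 0.1

    def predict(inputs):
        # Fire (output 1) if the weighted sum of the inputs crosses zero.
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > 0 else 0

    # Show the examples repeatedly, nudging each weight by its share of the error.
    for _ in range(500):  # far more passes than this simple task needs
        for inputs, target in examples:
            error = target - predict(inputs)
            bias += learning_rate * error
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]

    print([predict(inputs) for inputs, _ in examples])  # expected: [0, 0, 0, 1]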
