One day, you may have to switch your brain on

Neural networks are the nearest thing to artificial intelligence. In future, they may be as smart as we are.

Paul Rodgers
Sunday 17 September 1995 23:02 BST

The Russian rocket that blasted off to resupply the Mir V space station late last month carried an unusual bit of British kit: a computerised nose. The award-winning AromaScan device is to monitor air quality in orbit during a four-month trial. It will sniff out volatile gases escaping from experiments, and the tell-tale whiffs of melting plastic that are the first sign of overheating. It could become a routine but vital piece of space hardware.

AromaScan's maker, the Crewe-based company of the same name, which was spun off from the University of Manchester Institute of Science and Technology, owes much to the development of polymers that twist and emit tiny electrical bursts when they come into contact with the chemicals that make up smells. But beneath that technology lies another, one that is sweeping through applications from finance to health care: neural networks, billed by some as the basis for artificial intelligence.

The exact combination of chemicals that makes up an odour varies from sniff to sniff, so it is difficult to lay down rules in advance saying which smells are roses and which are burning insulation. Neural networks are ideal for solving that kind of problem because they "think" like real brains. A test sample may not be a perfect match, but a neural network can still identify it. Neural networks can also generalise from examples, allowing them to identify previously unseen samples, just as a human would recognise the letter "A" even in an unfamiliar typeface. And they can spot subtle patterns amid enormous amounts of otherwise irrelevant information.

Mimicking brains is notoriously difficult. Humans have 100 billion brain cells, and even insects have a hundred thousand or so. A typical PC has one main processing chip. While brains manipulate mountains of sensory data effortlessly, their electronic rivals must work through a list of instructions - the program - one at a time. That they work so quickly is a tribute to the simplification of their tasks.

Neural networks are different. Like real brains, they have many cells and even more interconnections. Most amazingly, they do not need programs at all, but can learn by trial and error. That means humans no longer have to simplify their assignments. At the University of Reading, cyberneticists led by Professor Kevin Warwick have built three-wheeled robots with neural networks that learn to avoid obstacles without being told to.

Neural networks come in almost as many varieties as Heinz soups, but with names that only mathematicians could love. The basic model, the multi-layer perceptron, was developed at the University of California a decade ago. It has three layers - inputs, processors and outputs. Data from each of the inputs goes to all of the processors, and from each of the processors to all of the outputs.

All things being equal, the results from each output would be the same. But they are not equal. The processors multiply the data from each input by a number called a weight. Initially, the weights are randomly generated, but they are then adjusted according to how close each processor comes to producing the correct result.
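To make that arrangement concrete, here is a minimal sketch in Python (a present-day language, used purely for illustration) of a three-layer network of the kind described above. The layer sizes and the plain weighted sums are assumptions for the example, not a description of any particular product.

```python
import random

# Illustrative only: every input feeds every processor, and every
# processor feeds every output, each connection carrying its own weight.
n_inputs, n_processors, n_outputs = 4, 3, 1

# The weights start out as random numbers, one per connection.
w_in = [[random.uniform(-1, 1) for _ in range(n_inputs)]
        for _ in range(n_processors)]
w_out = [[random.uniform(-1, 1) for _ in range(n_processors)]
         for _ in range(n_outputs)]

def forward(inputs):
    # Each processor multiplies every input by its weight and adds them up.
    hidden = [sum(w * x for w, x in zip(row, inputs)) for row in w_in]
    # Each output does the same with the processors' results.
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

print(forward([0.2, 0.5, 0.1, 0.9]))
```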

A commodity broker wanting to predict the price of pork bellies might, for example, use as inputs the price of rice in China, average temperatures in the American Midwest, sterling exchange rates and the cost of farmland in France, all collected from databases. Each set of inputs would be for a particular day, and the outputs would be compared with the actual pork belly price for that date. Processors that came close to calculating the right price would be reinforced, while the others would be punished. After thousands of repetitions on thousands of sets of data, the system would (assuming there really is a correlation between the input factors and pork belly prices) learn to make accurate predictions.

It is a bit like giving the computer gold stars for right answers and red crosses for wrong ones. Over time, like a child, it learns.
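For readers curious how the gold stars and red crosses might translate into arithmetic, the following Python sketch trains a single layer of weights on a handful of made-up days of data, using a simple error-driven update in place of a full reward-and-punishment scheme. The figures, and the choice of update rule, are illustrative assumptions rather than anything a real broker would use.

```python
import random

random.seed(1)

# Fabricated example days, purely for illustration: four input figures
# (rice price, Midwest temperature, sterling rate, farmland cost, all
# scaled between 0 and 1) and the pork belly price for that day.
history = [([0.3, 0.7, 0.5, 0.2], 0.6),
           ([0.4, 0.6, 0.5, 0.3], 0.7),
           ([0.2, 0.8, 0.4, 0.2], 0.5)]

weights = [random.uniform(-0.5, 0.5) for _ in range(4)]
rate = 0.1  # how strongly each mistake nudges the weights

for _ in range(1000):                       # thousands of repetitions
    for inputs, actual in history:
        predicted = sum(w * x for w, x in zip(weights, inputs))
        error = actual - predicted          # how far off the guess was
        # "Reward" or "punish": shift each weight in the direction that
        # would have reduced the error on this example.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]

print(weights)   # after training, these encode the learned relationship
```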

The distribution of the stars and crosses makes a big difference. In a Kohonen Network, for instance, the processor that comes up with the highest value for a particular set of data gets all the rewards, while the other processors are punished. By the time it is fully trained, the computer will strongly associate each set of data with one particular processor.
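A bare-bones sketch of that winner-takes-all idea, again in Python and again with made-up numbers: each "processor" holds a set of weights, the one whose weights best match the incoming data wins, and only the winner is adjusted so that it responds even more strongly to similar data next time. Treating "punishment" of the losers as simply leaving them untouched is an assumption of this simplified version.

```python
import random

random.seed(2)
n_processors, n_inputs = 5, 3

# Each processor starts with a random set of weights.
weights = [[random.random() for _ in range(n_inputs)]
           for _ in range(n_processors)]

def train_step(sample, rate=0.2):
    # The winner is the processor whose weights lie closest to the sample.
    scores = [sum((w - x) ** 2 for w, x in zip(row, sample)) for row in weights]
    winner = scores.index(min(scores))
    # Only the winner is adjusted: its weights move towards the sample.
    weights[winner] = [w + rate * (x - w)
                       for w, x in zip(weights[winner], sample)]
    return winner

for sample in ([0.9, 0.1, 0.2], [0.1, 0.8, 0.7], [0.9, 0.2, 0.1]):
    print(train_step(sample))   # over time, similar samples converge on one winner
```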

Kohonen Networks are one of three techniques used by Birmingham-based Recognition Systems to sort through masses of data on consumers, grouping them in ways that help its clients predict future buying behaviour. Paul Gregory, the company's managing director, says the results can be used to focus a client's marketing efforts on swing shoppers.

Other applications could even save lives. Nicholas Walker, the deputy group manager of systems and applied sciences at CRT, a subsidiary of Thorn EMI, has proposed using neural networks to pre-screen potential patients for illnesses. Screening tests abound, but in most cases the disease is so rare that testing everyone would be prohibitively expensive. Given thousands of case histories to examine, neural networks should be able to pick out the people most at risk and send them for further laboratory testing. The approach could work well on rare kidney disorders, but will probably be tested first on skin cancer, if funding is approved. An added bonus is that the networks could reveal risk factors too subtle for humans to notice.

Then there are financial markets. Neural networks are big in the City, where their ability to predict market movements is prized. There is still some resistance, however, to committing large sums of money based on figures coming out of a "black box". Because they have no program, it is impossible to check the work of a neural network for mistakes.

Neural networks have other practical problems, too. Although each processor is far simpler than the chip in a PC, the web of connections between them grows rapidly. A network of 10 processors, with an equal number of inputs and outputs, would have 200 connections. Increase the number of processors to 50, with 50 inputs and outputs to match, and the number of links soars to 5,000. So rather than build the actual hardware, neural network designers prefer to create simulations that run on conventional step-by-step computers.
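The arithmetic behind those figures is easy to check: in a fully connected layout, every input links to every processor and every processor to every output, so with equal numbers of each the connection count grows with the square of the size. A two-line check in Python, assuming that layout:

```python
def connections(n):
    # n inputs, n processors and n outputs, fully connected
    return n * n + n * n

print(connections(10))   # 200
print(connections(50))   # 5,000
```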

Still, neural networks are likely to become more common. A study last year by Frost & Sullivan, an American market research firm, suggested that the market would expand by a compound 48 per cent a year to 1998. Looking further into the future, some visionaries suggest that neural networks may one day be our equals.

Predictions by computer scientists that artificially intelligent computers are just around the corner are as old as the speciality - about 50 years. Neural networks are the closest the boffins have come to fulfilling those forecasts. Some people, including AromaScan's inventor, Krishna Persaud, think the technology holds the seed of true artificial intelligence. Others argue that they are just a new statistical tool. The jury is still out.

Overleaf: the neural network as murderer. Could it happen?
