Science: Where computers prove incapable - Fermat's Last Theorem has been cracked, but Darrel Ince explains how researchers have shown that some problems cannot be solved

Earlier this year, the news came through that Professor Andrew Wiles of Princeton University had succeeded in proving Fermat's Last Theorem - a puzzle that had lain unresolved for three centuries.

Proving mathematical statements to be true (or false) is one of the activities mathematicians indulge in most frequently. Two features of the process often surprise the layman. First, how error-prone the business of proving things is. There are plenty of cases of a seemingly straightforward proof being published, only for it to be modified or even proved false a few months later. The second surprising fact about proofs is that they can be very long. The proof of Fermat's theorem, written out in full, occupies hundreds of pages of text.

Now research scientists at the University of California, Stanford University and AT&T laboratories have startled the mathematical world by developing a technique capable of checking - or at least certifying - the correctness of a proof. Of obvious importance to mathematicians, this also has major ramifications for computer scientists.

In effect, the researchers have devised the mathematical version of ultrasonic testing, which highlights any flaws in a proof. A series of operations - transformations - is applied to the text of a proof, and any errors become more visible.
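The flavour of the idea can be hinted at with a toy sketch in Python - entirely hypothetical, and far simpler than the real mathematics. Treat a proof as a long list of steps, each one a small claim that can be checked locally; a checker that reads only a random sample of steps is cheap, but without the transformations described above, a lone flaw is likely to slip through:

```python
import random

# A toy "proof": 1,000 steps, each a small claim (that a + b = c)
# which can be checked locally and mechanically.
proof = [(i, i, 2 * i) for i in range(1000)]
proof[700] = (700, 700, 1401)  # plant a single flawed step

def check_step(step):
    a, b, claimed = step
    return a + b == claimed

def spot_check(proof, samples, rng):
    # Read only a few randomly chosen steps rather than the whole proof.
    return all(check_step(rng.choice(proof)) for _ in range(samples))

# Reading every step finds the flaw, but costs a full pass over the text:
flaws = [s for s in proof if not check_step(s)]
print(len(flaws))  # 1

# A small random sample will usually miss a lone bad step: with 10
# samples, the chance of catching it is only 1 - (999/1000)**10.
```

The point of the researchers' transformations is to rewrite the proof so that any single flaw corrupts a large fraction of the transformed text - after which even a tiny random sample is very likely to hit an error.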

There are two implications for computer scientists. The first concerns 'formal methods of software development'. This is the generic term used to describe methods that use mathematics both to draw up the specifications for what a piece of software should achieve and to design the software systems themselves to meet the specifications. Such methods are normally used to develop programs for ultra-reliable applications such as the control of nuclear power stations.
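As a loose illustration of the flavour of formal methods (the names `isqrt_spec` and `isqrt` are invented here, not drawn from any real project): the specification of what a program should achieve is written as a mathematical condition, kept separate from the code that is supposed to meet it.

```python
def isqrt_spec(n, result):
    # The specification, stated as mathematics: result is the integer
    # square root of n exactly when result^2 <= n < (result + 1)^2.
    return result >= 0 and result * result <= n < (result + 1) * (result + 1)

def isqrt(n):
    # Precondition from the specification: n must be a natural number.
    assert n >= 0
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Checking the implementation against the specification on sample cases.
for n in range(50):
    assert isqrt_spec(n, isqrt(n))
```

In genuine formal development the implementation would be proved correct against the specification for all inputs, not merely tested on a finite sample as this sketch does - and it is those proofs that are long and error-prone.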

A major problem that has dogged this area is that the proof process is long and error-prone. One incorrect step could lead to a serious error - which could result in loss of life. Use of formal methods has been confined largely to projects where the economic loss due to an error would be so great as to justify employing large numbers of highly trained staff to develop the mathematical specifications and then to check the design and the proofs of the code. The new techniques for the certification of proofs promise to reduce substantially the amount of checking and validation needed.

The second effect of the research concerns a class of computer applications known as NP-complete problems (NP stands for 'nondeterministic polynomial time'). The time taken for a program to solve such a problem exactly grows explosively with the size of the problem, to the point where even modest instances would take longer than the life expectancy of the universe. These are not abstract academic problems, but occur, for example, in the design of telecommunications systems or the layout of computer circuits on silicon chips.
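The scale of the difficulty is easy to demonstrate. For a problem that requires finding the best tour through n cities, a brute-force search must examine (n - 1)!/2 distinct orderings. A short Python sketch (the figures are hypothetical, assuming a machine that checks a billion orderings per second) makes the point:

```python
import math

def brute_force_tour_count(n_cities):
    # A tour visiting every city once can start anywhere and run in
    # either direction, so there are (n - 1)!/2 distinct tours to check.
    return math.factorial(n_cities - 1) // 2

tours = brute_force_tour_count(30)
checks_per_second = 1_000_000_000  # a generously fast hypothetical machine
seconds = tours / checks_per_second
years = seconds / (60 * 60 * 24 * 365)
age_of_universe_years = 1.4e10

print(f"{tours:.2e} tours")  # ~4.42e30 tours for just 30 cities
print(f"{years:.2e} years")  # ~1.4e14 years - far beyond the universe's age
```

Thirty cities already demand some ten thousand times the current age of the universe; each extra city multiplies the count again.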

Until recently, researchers believed that although an exact solution to these NP-complete problems is not feasible, approximate ones could be found. An archetypal NP-complete problem might involve devising a layout of roads to connect all the cities in an area using the smallest amount of road material. Computer scientists believed that while an optimal solution would never be practical, there could be approximate solutions guaranteed to come within 99 per cent of the minimum amount of material.
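What an approximate solution looks like can be sketched in Python - a hypothetical illustration using the classic nearest-neighbour heuristic on invented coordinates, not anything from the research itself. The heuristic runs quickly, but carries no guarantee of how close to the optimum it comes:

```python
import itertools
import math

# Invented city coordinates, chosen so the heuristic misses the optimum.
cities = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting the points in this order.
    return sum(dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

def nearest_neighbour(cities):
    # Greedy heuristic: always hop to the closest unvisited city.
    tour, rest = [cities[0]], list(cities[1:])
    while rest:
        nxt = min(rest, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

approx = tour_length(nearest_neighbour(cities))
# Feasible only for tiny instances: check every ordering for the true optimum.
best = min(tour_length(list(p)) for p in itertools.permutations(cities))
print(round(approx, 3), round(best, 3))  # 9.657 8.828
```

Here the fast heuristic lands within about 10 per cent of the optimum. The question the researchers settled is whether fast methods with a *guaranteed* closeness to the optimum can exist at all.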

The research in the US has proved that, for large classes of NP-complete problems, no such approximation programs exist - and that these problems will almost certainly remain incapable of being computerised. This spin-off alone is being claimed as the major computer science discovery of the past two decades.

The writer is professor of computer science at the Open University.