Science: That was super but this one is ultra: Darrel Ince looks forward to the birth of the million million machine

Darrel Ince
Sunday 16 May 1993 23:02 BST

A NEW generation of computers that will dwarf our present supercomputers is about to be born. Known as ultracomputers, they are designed to execute more than 1 million million instructions per second.

This increase in power represents staggering progress in the space of five years: in 1988 the most powerful computers were capable of operating at no more than 2,000 million instructions per second, so the new machines promise a roughly 500-fold leap. However, the development brings with it hard questions which, if answered wrongly, may cost tens of billions of dollars.

An ultracomputer will consist of thousands of separate processors, loosely coupled by sophisticated switches, whereas supercomputer technology has traditionally depended on processors tightly coupled on the same slice of silicon. At today's prices, such a loosely coupled network may cost as much as £200m.
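
To make the idea concrete, here is a minimal sketch of that style of computing, written in C against the MPI message-passing library (which, it should be said, postdates this article; the computation itself is an invented stand-in). Each processor owns its own memory and cooperates with the others only by sending messages across the switches:

    /* A minimal sketch of loose coupling: no shared memory, only
       messages. Assumes an MPI installation (mpicc, mpirun). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which processor am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are there?   */

        if (rank != 0) {
            /* Each worker computes a partial result locally ...       */
            double partial = (double)rank * rank;  /* invented work     */
            /* ... and ships it through the switch to processor 0.     */
            MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            double sum = 0.0, partial;
            for (int src = 1; src < size; src++) {
                MPI_Recv(&partial, 1, MPI_DOUBLE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                sum += partial;
            }
            printf("combined result from %d processors: %f\n",
                   size - 1, sum);
        }
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with something like mpirun -np 8 ./a.out, the eight processes can each sit on a different processor - or, indeed, a different machine.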

One of the leaders of the American supercomputer community, Gordon Bell, has pointed out that such networks are capable of solving only a narrow range of problems. He maintains that in three years' time, conventional technology may be able to deliver ultracomputer performance at a fraction of the present cost: something like £20m.

The increased performance will come from improvements in semiconductor technology, combined with experience in linking large numbers of individual computers.

The increases in semiconductor speed expected over the next three years will mean that cheap workstations - extremely powerful desktop computers - can be coupled together to tackle some of the more straightforward computations, the 'low-end applications', that an ultracomputer might otherwise run.
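
As a sketch of how such a coupling might look in practice - again in C with MPI, and with the simulate routine and job count invented purely for illustration - each workstation below takes an independent slice of a small simulation, and the network is touched only once, to combine the answers:

    /* A hedged sketch: independent "low-end" jobs farmed out to
       networked workstations. Only the final reduction crosses
       the (relatively slow) network. */
    #include <mpi.h>
    #include <stdio.h>

    #define NJOBS 1000

    static double simulate(int job)   /* stand-in for a real stress code */
    {
        return (double)job / NJOBS;   /* invented result */
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Strided distribution: machine k takes jobs k, k+size, ... */
        double local_max = 0.0;
        for (int job = rank; job < NJOBS; job += size) {
            double stress = simulate(job);
            if (stress > local_max)
                local_max = stress;
        }

        /* One collective message combines the per-machine maxima. */
        double worst;
        MPI_Reduce(&local_max, &worst, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("worst-case stress over %d jobs: %f\n", NJOBS, worst);

        MPI_Finalize();
        return 0;
    }

With an implementation such as Open MPI, pointing the launcher at a list of desktop machines - mpirun --hostfile hosts.txt -np 16 ./stress, say, where hosts.txt is a hypothetical file naming the workstations - spreads the processes across the network.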

However, another complication afflicts the supercomputing industry: a lack of methods, tools and experience in developing large software systems for massive computers, and a shortage of suitable applications. Mr Bell says that much of software developers' time is wasted trying to match applications to loosely coupled computers that are not really suited to running them.

This poses big problems for the computer industry and supercomputing's customers. It is highly likely that we will see high-end applications, such as weather forecasting, being tackled by substantially cheaper technology, while low-end applications, such as small-scale simulations of stresses in buildings, may be solved by relatively cheap technology employing networked workstations.

Mr Bell's solution - to wait and see - is simple but requires considerable courage. In the meantime, the massive funds that would otherwise be devoted to ultracomputer hardware could be used to address the shortage of developers skilled enough to produce software systems for supercomputers, and to run collaborative programmes in which scientists and engineers attempt to solve problems using current technology. With luck, the results that emerge will help in the development of ultracomputer technology.

The United States, which is at the leading edge of hardware technology, seems to be rushing headlong into its High Performance Computing and Communications Program in the hope that it will lead to ultracomputers.

For once, the United Kingdom's relative poverty may be an advantage. It at least allows us to sit back and watch the potential loss of billions of dollars in a hardware technology that could be obsolete within three years.

The author is professor of computer science at the Open University.
