When networkers challenge the superpowers: Computers that stole the show in the Eighties and lead today's market must come to terms with local PC link-ups, says Greg Wilson

Greg Wilson
Monday 11 January 1993 00:02 GMT

THE STRUGGLE among supercomputer manufacturers for market share has become one of the most intense in the whole computing industry. More vendors than ever crowded into Supercomputing '92, the industry's leading get-together, in Minneapolis.

Almost all the manufacturers, however, were displaying machines built on the same general principle: several hundred microprocessors, connected by tremendously fast communication links enabling them to work together on the same problem.
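To make that principle concrete, here is a minimal sketch - in modern Python, and no vendor's actual software - of many processors sharing one problem: a numerical integration is split into slices, with local processes standing in for the hundreds of linked microprocessors. The worker count, integrand and step counts are all illustrative.

```python
from multiprocessing import Pool

def partial_sum(args):
    """Midpoint-rule integral of f(x) = 4 / (1 + x^2) over one slice of [0, 1]."""
    start, end, steps = args
    h = (end - start) / steps
    return sum(4.0 / (1.0 + (start + (i + 0.5) * h) ** 2) * h
               for i in range(steps))

if __name__ == "__main__":
    workers = 8                                 # stands in for the processor count
    slices = [(i / workers, (i + 1) / workers, 100_000)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, slices))  # each slice computed in parallel
    print(total)                                # converges on pi
```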

Supercomputers are now used to design automobiles and drugs, study global warming and model whole economies. As more and more engineering and pharmaceutical companies begin to rely on them, leading vendors such as IBM and Fujitsu have joined the race and increased the pressure on smaller companies. Cray Research Incorporated, for example, still dominates the market with machines containing only a few very powerful processors, but even it is to introduce a massively parallel computer.

But while such computers stole the show during the Eighties, a completely different approach to supercomputing was being explored quietly. The combined computing power of the personal computers in the average university or office building exceeds what most nuclear power stations require. Networks linking these machines have become common and reliable, and a number of groups have been exploring how to harness them, overnight or at weekends, as if they were a single supercomputer. Compared with the traditional approach of building one big machine, this technique is less expensive: in the short term, because the PCs were probably going to be bought anyway, and in the long term, because individual machines can be upgraded in ones and twos rather than being replaced wholesale.
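A minimal sketch of that harnessing idea, again in modern Python rather than anything the groups of the time used: one coordinator machine exposes a queue of work units over the network, and idle PCs connect and drain it. The port number, authentication key, hypothetical file name farm.py and the squaring "computation" are all illustrative assumptions, not details from any system the article names.

```python
from multiprocessing.managers import BaseManager
from queue import Empty, Queue
import sys

class FarmManager(BaseManager):
    """Makes in-memory queues reachable over TCP from other machines."""

def coordinator():
    tasks, results = Queue(), Queue()
    for n in range(1000):
        tasks.put(n)                                  # the work units to farm out
    FarmManager.register("tasks", callable=lambda: tasks)
    FarmManager.register("results", callable=lambda: results)
    server = FarmManager(address=("", 50000), authkey=b"farm").get_server()
    server.serve_forever()                            # idle PCs connect here

def worker(host):
    FarmManager.register("tasks")
    FarmManager.register("results")
    farm = FarmManager(address=(host, 50000), authkey=b"farm")
    farm.connect()
    tasks, results = farm.tasks(), farm.results()
    while True:
        try:
            n = tasks.get(timeout=5)                  # stop once the queue drains
        except Empty:
            break
        results.put((n, n * n))                      # stand-in for real computation

if __name__ == "__main__":
    coordinator() if len(sys.argv) == 1 else worker(sys.argv[1])
```

Run without arguments on one machine to serve the queue, then run `python farm.py <coordinator-host>` on each spare PC overnight; the work is shared among however many happen to join.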

For the first time, a group using this approach has won supercomputing's highest honour: the Gordon Bell Prize. A program called EcliPSe - written by Hisao Nakanishi and Vernon Rego of Purdue University in Indiana, and Vaidy Sunderam of Emory University in Georgia - was run simultaneously on 191 different machines, mainly in universities across the United States. It delivered more than 10 times as much computing per dollar as a conventional Cray supercomputer. Although EcliPSe did not deliver as much absolute capacity as a Cray could have done, Sunderam claims that this was only because the researchers did not try to use more workstations at the same time.

Their impressive results do not, though, mean that traditional supercomputing is dead.

Many types of program, unlike EcliPSe, simply have the wrong structure to make use of a distributed supercomputer: when each processor must constantly exchange intermediate results with the others, the relatively slow links of an office network become a bottleneck that only the fast internal connections of a single machine can remove. Instead, the traditional supercomputers and the new workstation networks are likely to become better integrated.

Privately, some vendors in Minneapolis acknowledged that a 'seamless' supercomputer would have been available much sooner if the transputer chip produced by the British firm Inmos in the Eighties had been more commercially successful. But lack of British government support during its critical start-up years and delays in bringing its second-generation chips to the market led many participants at Supercomputing '92 to look upon Inmos as yesterday's company.

Poignant evidence of that attitude comes from a new product by Meiko, a Bristol-based firm whose first machines were built using transputers. Its latest hardware, however, contains American microprocessors, Japanese arithmetic chips and Meiko's own communication chips - but no transputers.

