
Another holy trinity

Merge Web servers, and random-process chunk everything on a supercomputer

Chris Gulker
Monday 23 November 1998 01:02 GMT

EVERY SO often a couple of disparate ideas come together. Mind you, my ideas seem to be coming more and more slowly as the world speeds up and I slow down. So I loved it, after last fortnight's despatch about Beowulf supercomputers, when a couple of bits and pieces began clicking and fitting together.

In 1995, I sat in a Silicon Valley cafe and listened to a programmer called Chuck Shotton describe what he called an RAIC (Redundant Array of Inexpensive Computers). Shotton had just written WebStar, Web server software for the Apple Macintosh.

He thought that four or five inexpensive Macs could keep up with a UNIX Web server, and would be far less expensive to buy, set up and maintain. While WebStar went on to be successful, the RAIC concept never took off.

Fast forward to October 1998. I'm at the Advanced Imaging Conference in San Francisco, listening to a computer scientist, Eric Peters, describe a new way of serving streaming video. Peters was struggling to find a way to make video servers more reliable. High-end digital video editing systems offer great advantages in fast-moving operations such as TV news, but there's a problem on the server side: serving multiple streams of full-motion video challenges even state-of-the-art equipment.

One company, Avid, uses an SGI Origin 2000, a supercomputer, to move the data. Disk storage and retrieval speeds are a potential bottleneck, so the latest RAID technology is employed. RAID (Redundant Array of Inexpensive Disks) divvies the data up into successive "stripes" that can be cued up and ready to go in quick sequence, improving "playback" performance. The problem is that disk drives are still mechanical devices subject to failure.
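To make the striping idea concrete, here is a minimal sketch in Python; the 64KB stripe size and round-robin layout are my own illustrative assumptions, not Avid's actual parameters.

STRIPE_SIZE = 64 * 1024  # stripe size is assumed for illustration

def stripe_across(data, num_disks):
    """Deal successive fixed-size stripes of `data` out across the
    disks in rotation, so reads can be cued back up in quick
    sequence from several drives at once."""
    disks = [[] for _ in range(num_disks)]
    for n, off in enumerate(range(0, len(data), STRIPE_SIZE)):
        disks[n % num_disks].append(data[off:off + STRIPE_SIZE])
    return disks

Note that every disk ends up holding every Nth stripe, which is why a single failed drive stalls playback: the missing stripes recur all the way through the stream.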

As the drive count goes up in massive RAID arrays, the reliability goes down. It only takes a single drive failure to bring playback to a halt. RAIDs can rebuild data from a crashed drive, but it takes time, and that's no consolation if the crash occurs while you're feeding live video to millions on the Six O'Clock News.

So Peters looked at mathematical Chaos and Complexity Theory. Both fields suggest that random processes are robust and highly reliable. While "random" may not seem useful to a video editor ("I don't want any old video, I want the bit I want!"), it can be. Incoming data are broken up into small chunks, and then randomly stored on an array of servers. Each chunk is then backed up to another server, also randomly chosen. The chunking software keeps a map of where all the pieces go.
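A minimal sketch of that pipeline in Python, as just described: chunk the data, place each chunk at random, back it up at random, and remember where everything went. The chunk size and data structures are illustrative assumptions, not details Peters gave.

import random

CHUNK_SIZE = 256 * 1024  # assumed; Peters gave no figure

def store_stream(stream_id, data, servers, chunk_map):
    """Break `data` into chunks; store each on a randomly chosen
    server and back it up on a second, also randomly chosen; record
    both locations in the map. `servers` maps server id -> chunk dict."""
    for seq, off in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunk = data[off:off + CHUNK_SIZE]
        primary, backup = random.sample(list(servers), 2)  # two distinct servers
        servers[primary][(stream_id, seq)] = chunk
        servers[backup][(stream_id, seq)] = chunk
        chunk_map[(stream_id, seq)] = (primary, backup)

Using random.sample guarantees the back-up copy never lands on the same machine as its primary.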

The servers are connected by a network - 100Base-T Ethernet in the case Peters cited. While it sounds like RAID, there are differences. For one, a server failure doesn't bring things to a halt: the chunking software merely begins to point to the back-up chunks. And unlike a RAID array, the random server array gets more robust as servers are added.
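Continuing the sketch above, the read path needs only to consult the map; `live_servers` here stands in for whatever health check the real system would use, an assumption on my part.

def read_chunk(stream_id, seq, servers, chunk_map, live_servers):
    """Failover read: if the chunk's primary server is down, the map
    simply points us at the back-up copy, so playback never halts
    for a single failure."""
    for server_id in chunk_map[(stream_id, seq)]:
        if server_id in live_servers:
            return servers[server_id][(stream_id, seq)]
    raise IOError("both copies of chunk %d are unavailable" % seq)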

The random process ensures that data will be very evenly distributed across all the servers (a quick simulation below makes the point). Peters said that, in tests, the machines could easily saturate the network, and that new servers could be added on the fly as extra capacity was needed.

Peters' server cluster sounds exactly like the Beowulf supercomputer arrays described by Tom Sterling, a scientist at the California Institute of Technology. These supercomputers, like Los Alamos Labs' Avalon, consist of large arrays of high-end PCs connected by a fast Ethernet network.
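As for that even-distribution claim, it is the law of large numbers at work, and a few lines of Python are enough to see it; the server and chunk counts are arbitrary choices of mine.

import random
from collections import Counter

# Scatter 100,000 chunks across 20 servers at random and count
# what lands where: the counts cluster tightly around the
# 5,000-chunk mean.
counts = Counter(random.randrange(20) for _ in range(100_000))
print(min(counts.values()), max(counts.values()))  # typically within a few per cent of 5,000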

Another link! So what happens if you merge all three ideas - clusters of Web servers and random-process chunking, all running on a Beowulf-style supercomputer? Possibly, an answer to some of the most vexing problems in computing. Companies are trying to marry huge corporate databases to Web servers in order to do electronic commerce, and finding it's not easy. The systems get so complex that failures are frequent and maintenance costs are high.

A server cluster could replace a plethora of interconnected systems. A search engine on each server could provide indexing, like a mini Yahoo or Lycos. Indeed, Apple Computer recently started shipping such an engine, called Sherlock, in Mac OS 8.5. The data maps could be set to mimic any kind of data structure, particularly if XML is used as their language, eliminating the need to replace officeloads of proprietary client software and data terminals.
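The column only floats XML as a possible language for those data maps; as a purely hypothetical illustration, the chunk map from the earlier sketch could be published like this (the element and attribute names are invented, since no schema is specified):

import xml.etree.ElementTree as ET

def map_to_xml(chunk_map):
    """Render the chunk map as XML so any XML-speaking client can
    navigate the store without proprietary software."""
    root = ET.Element("datamap")
    for (stream_id, seq), (primary, backup) in sorted(chunk_map.items()):
        ET.SubElement(root, "chunk", stream=stream_id, seq=str(seq),
                      primary=str(primary), backup=str(backup))
    return ET.tostring(root, encoding="unicode")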

Voilà, Supercomputers 2. You read it here first!

cg@gulker.com
