Watching through the glass, Karl Melder chuckles. "You see what he did? He highlighted the column headings as well as the data. When he sorted them, the headings got sorted in with the addresses. A lot of users make that mistake."
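The mistake is easy to reproduce in miniature: if the heading row is selected along with the data, a sort files the headings in among the entries. The fix is to separate the header out, sort only the data rows, and reattach it. A sketch in Python, with invented table contents for illustration:

```python
# A small table; the first row holds the column headings.
rows = [
    ["Name", "Address"],           # headings
    ["Walker", "12 Oak Street"],
    ["Adams", "3 Elm Road"],
]

# Correct: split off the heading row, sort only the data, reattach.
header, data = rows[0], rows[1:]
sorted_rows = [header] + sorted(data, key=lambda row: row[0])

# The mistake Melder describes is the equivalent of sorted(rows),
# which files "Name"/"Address" in among the addresses.
```

After the correct sort, `sorted_rows` still begins with the heading row, followed by the data in alphabetical order.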
As a specialist tester at Microsoft's usability labs, Melder spends a lot of time observing the mistakes computer users make. He is not interested in testing people's abilities. Rather, he wants to find the quirks in his company's products that confuse people and make the software harder to use.
"When we tested Microsoft Access, we found that people kept getting stuck on printing summaries of reports," he recalls. "They just couldn't see how to do it. Eventually, we tried moving the summary button from the left side of the dialogue to the right side. After that, everybody got it right."
Details like these can be critical to the success of products such as Microsoft's Excel spreadsheet or Access database. There is no point in adding ever-more powerful features to the software if people cannot work out how to use them. Often, a small change in the design can make a huge difference to how the program is perceived.
Situated at Microsoft's headquarters just outside Seattle, the usability lab consists of a pair of rooms separated by one-way glass. A volunteer sits in one room, working at an ordinary PC. In the other room, the tester monitors the user's actions on his own PC, adding time codes and comments where necessary. The users are encouraged to talk aloud as they work. The entire session is recorded on videotape.
Microsoft keeps a database of around 10,000 volunteers: men, women and children from the Seattle area, recruited mainly by word of mouth. When a test is planned, a co-ordinator searches the database for people whose level of computer expertise matches the product's target market. In return for their help, volunteers receive free Microsoft products.
As well as helping to refine existing features, usability testing also plays a part in planning new products and features. To this end, Microsoft staff often engage in a process called contextual inquiry - a fly-on-the-wall exercise in which testers visit offices and homes, closely watching how people use their PCs.
"We might spend a day at a user's site, sitting behind them, quietly observing what they are doing, not interfering in any way," says Sarah Leary, a product manager in Microsoft's Desktop Applications Division.
The technique has proved far more effective than asking users what they want from their programs. "It's hard to get people to articulate their needs," says Leary. "They tend to focus on individual activities rather than the wider picture. With contextual inquiry, we can step back from the programs and look at the overall goals."
One observation that emerged from the process was the way users often need to gather documents originating from different programs: a sheet of figures from a spreadsheet, a page of text from a word processor, and so on.
This led to the idea of Office Binders, a feature of Microsoft's Office 95 suite that allows users to do just that. It is unlikely that this need would have been identified in any other way.
For another example, Leary tells of a user who was struggling to use Microsoft Word. Eventually, he pointed to the row of buttons near the top of the screen and said: "Gee, I wish I knew what all those are for."
As a result, Microsoft introduced the "tooltip", a tiny caption that pops up when the user rests the mouse on a button for a couple of seconds.
It is only by running tests like these that software vendors can hope to make their programs easy to use. In the past, the strategy was to add every conceivable feature to a program in the hope of keeping ahead of the competition. But vendors have at last realised that the more features they add, the harder it becomes for people to get at them. By deciding to concentrate on usability rather than functionality, they are giving the users a chance to catch up.