Feelings, computer developers have long agreed, have no place in the quest for artificial intelligence; who wants their PC to offer sympathy when they're gloomy, or reassurance when they're worried? Isn't Microsoft Word's irritatingly upbeat troupe of Office Assistants - cartoon "helpers" designed to sense when you're in need of advice and obligingly offer it - bad enough?
Perhaps. But Rosalind Picard has a different vision - of emotionally literate computers, sensitive to their users' enthusiasms and frustrations, and able to adapt their behaviour accordingly; of a world in which, when you lose patience with unco-operative software and prepare to hurl the manual at the monitor, your PC will taste digital fear. It sounds like the stuff of paperback sci-fi; but then so do a lot of the projects that emerge from the Massachusetts Institute of Technology's Media Lab in Cambridge, outside Boston, where Picard is professor of computers and communications, and head of the affective computing research group.
Since its inception in 1995, the group has cooked up some promising prototypes, including a pair of spectacles that generate an on-screen read-out of the wearer's mood based on facial expressions, and a mouse that uses a finger-pressure sensor to estimate the user's happiness. Data from both could be used by emotionally sensitive software to tailor an application to personal tastes.
The ultimate goal, though, is a new generation of computers adept at recognising and simulating emotions, even, perhaps - and here vast philosophical questions remain unanswered - computers that are capable of feeling: genuinely emotional machines.
Picard's hunch that emotions - "affect" is her preferred, unemotional term - may be the missing ingredient in artificial intelligence (AI) was inspired by advances in psychology suggesting that feelings, far from getting in the way of successful problem-solving, are essential aspects of human intelligence. Seemingly rational patients suffering from frontal-lobe brain damage prove to be disastrous decision-makers, responding to simple tasks such as scheduling an appointment by cycling endlessly through the possible alternatives, unable to use "gut feeling" to reach a conclusion, and untouched by feelings of embarrassment as others grow impatient and incredulous at their behaviour. In short, they behave a lot like today's "intelligent" computers.
"I realised there was something really missing in AI - that we'd been completely ignoring this part of the brain," Picard recalled, when I visited her recently. "As we tried to build machines that have the abilities humans have, we were winding up with machines that malfunctioned very much as humans malfunction when their emotions aren't hooked up."
The crucial first challenge is to build computers that can recognise their users' emotional states. Here, Picard has harnessed the potential of another Media Lab creation, the WearCam, pioneered by Steve Mann, now at the University of Toronto. A head-mounted camera that records everything the wearer sees - Picard used it to broadcast her field of vision to the Web as she walked to her car at night - the WearCam offers an ideal test-bed for affective computing. Constant physical contact with its user allows the collection of emotion-related data such as heart rate, muscle tension, skin conductivity and body temperature.
"You want the camera to roll all the time, but you don't want to have all the data it collects," says Picard. "But if you're really enjoying an experience, having to hit the `record' button interrupts that - such as if your child does something wonderful and you run to fetch the camera: by the time you have the camera, the child's not doing it any more." But an affective WearCam, she explains, "would save only the things that it could sense you were really interested in... because it's right when something gets my attention that I can't press the button."
Other Media Lab researchers are developing "virtual pets" with something approximating basic emotional capabilities. Don't talk to them about Tamagotchis: even the Furby - the endearing (but unendearingly priced) interactive creature for which US children are pestering their parents this Christmas, causing two injuries in an Illinois toyshop stampede last month - is yesterday's technology. The new race of computer-driven companions, including the Yamaha Puppy, nicknamed Yuppy, and another MIT cyber-canine, Silas T Dog, are programmed to label their varying internal states with emotional terms ("feel good", "feel bad", and the like) and respond accordingly.
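The labelling scheme described above - a varying internal state, mapped onto emotional terms, driving behaviour - can be sketched very simply. This is a toy illustration, not the actual Yuppy or Silas code; the state variable, thresholds and responses are all invented.

```python
class VirtualPet:
    """Toy sketch of an 'emotionally labelled' creature: a numeric
    internal state is tagged with an emotional term, and behaviour
    follows from the tag."""

    def __init__(self):
        self.valence = 0.0  # internal state: negative = bad, positive = good

    def experience(self, event_value):
        # Events nudge the internal state up or down, kept within bounds.
        self.valence = max(-1.0, min(1.0, self.valence + event_value))

    def label(self):
        # Map the raw internal state onto an emotional term.
        if self.valence > 0.3:
            return "feel good"
        if self.valence < -0.3:
            return "feel bad"
        return "neutral"

    def behave(self):
        # Behaviour is driven by the labelled emotional state.
        return {"feel good": "wag tail",
                "feel bad": "whimper",
                "neutral": "idle"}[self.label()]

pet = VirtualPet()
pet.experience(0.5)   # something pleasant happens
print(pet.label())    # -> feel good
print(pet.behave())   # -> wag tail
```

Even this crude loop shows why Picard calls such states "motivations" rather than feelings: the label changes the creature's behaviour, but nothing in the machine experiences anything.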
But do they "have" emotions? "Yuppy doesn't feel in the sense that I can imagine a virtual creature ultimately feeling, as close to human feelings as we could get," Picard concedes. "But he does have motivations and learning behaviour driven by his emotional state." A limited degree of emotion, but one that today's PCs would surely envy - except that if they were capable of envy, they'd have nothing to be envious about.
Inevitably, Picard's work, even in the more down-to-earth field of emotion recognition, raises some disturbing ethical questions: how comfortable would you feel using a Web-connected computer that could read your feelings? What could an unscrupulous government do with the information? What wouldn't the average telemarketer give to know when customers were feeling receptive, and in the mood to buy double-glazing?
She acknowledges a popular fear of the increasing aptitudes of computers, and recalls introducing one of the Media Lab's staff to her affect recognition software: "She turned her back on the machine and said to me, in a hushed voice and with a horrified look on her face: `Does it know I don't like it?'"
It's up to computer designers, Picard argues, to ensure that such fears are never realised: "In everything we've designed, you could rip off the sensors, or I could tell you how to make it not work. We need to protect people - if they don't want to give the information, they should be able to disable the system so it can't get it.
"People think that a computer like this could tell everything about them, but it cannot do that. It can measure external signals, but it can't read your thoughts." We already relate to our computers as if they were alive, she says; equipping them with emotional sensitivity would just make that relationship more satisfying.
The notion of machines with emotions raises a plethora of further dilemmas - including the mind-boggling concept of "computer rights", an issue Picard treats seriously in her book Affective Computing (1997): "Giving computers emotions is likely to add heat to the fires of any future activists who might favour machine liberty."
But even Picard draws the line at the idea of a computer with a soul. "I'm troubled by the presupposition that we can reduce everything about humans down to some little mechanisms. It's a very arrogant presupposition, and as scientists we have to be open to the possibility that there's something more to us."