The story of VRAM starts in California, in a small town named Repoville, where, six years ago in 2005, an electronic telecommunications engineer named Dan Petrowalski was experimenting with a new idea for a telephone answering device.
Now, as Petrowalski realised, the telephone answering device is not an answering device at all but a tape recorder that expects the caller to do all the work. So why not, thought Petrowalski, invent an answering device that really answered a caller while you were out? A machine you could talk to intelligently? It was possible. But it depended on two things. It depended on being able to devise a machine that could simulate your voice to sound like you, and a machine that could simulate your thought patterns to talk like you.
The first was not too difficult. Nowadays it is perfectly possible to sample any sound and use the elements of it to manufacture a new synthetic version of the old sounds. But talking artificially like someone else is different, and difficult, because it involves reactions that are thought to be particularly human, such as hesitation, reluctance, decisiveness, even contradiction. However, after much experimentation, and with the help of a friend who worked in artificial intelligence, Petrowalski finally devised a computer-controlled voice unit that could understand human speech and respond to it.
This meant that if you phoned a friend called Len who had one of these Virtual Reality Answering Machines, you would hear a familiar voice saying, "Hi, I'm Len. I'm not here right now, but my VRAM will talk to you. Go ahead!" And if you then said, "Hi, Lenny, where are you right now?" the voice would say, "Right now I'm playing golf with the boss, but you can't phone me there because he hates mobile phones ringing while he's playing." "Well, Lenny," you'd say, "I just rang to say that the dinner date you and Anne fixed for me and Susie next Tuesday is impossible. We'll have to reschedule it." "I'm real sorry to hear that," the VRAM would say, "But as soon as I get back from golf or Anne returns from work, we'll get right back to you and fix another date."
Within a year or two this cheerful substitute had been improved to the stage where it wouldn't just hold the fort and make stalling decisions - it would actually make real decisions, and reschedule the dinner before the real person had come home from golf. It would chat, exchange gossip, offer advice, listen sympathetically and so on, in a recognisable human voice. These displays of synthetic sympathy were especially useful to the Samaritans and to public relations firms, who needed to pour on charm and understanding in endless rations, even if people were occasionally sued for failing to fulfil promises made by their VRAMs.
Then something strange happened. It began to be noticed that people preferred ringing the VRAM to ringing the real person. The real person was often ill, or in a bad mood, or indecisive. The VRAM, sounding otherwise identical, was always cheerful and positive. Even when the VRAM made commitments that the real person couldn't keep, it was usually felt that it was the real person who had let the VRAM down.
A national outcry led the US government to phone Mr Petrowalski, who had become a reclusive billionaire, to put pressure on him to soften the character of VRAMs. What the White House didn't know was that Mr Petrowalski had recently died. "Hi there, Mr President," his VRAM answered, "I'm so glad you could call. I am only sorry that I am recently deceased, but what I would have said is this..."
This led to an entirely new debate about whether your VRAM could make valid decisions for you after your death, and what constituted death and immortality. The debate is still unresolved as of 2011.