Science: The mind machine
Igor Aleksander has created a real-life successor to Hal. Charles Arthur hears how
How did you do? If you found it hard, perhaps you ought to know that Igor Aleksander has a machine which can do that easily. When he asks it (in words) to produce an image of "banana" that is "blue with red spots", the image swims on to the screen in seconds.
This, says Professor Aleksander, is indicative that the computer has something which scientists and computer engineers have been struggling towards for more than 50 years: machine consciousness. Yes, the same thing that marked out Hal, the computer in 2001: A Space Odyssey, and the robots of Isaac Asimov's science fiction.
At the moment, this machine consciousness can only categorise and imagine things in a limited domain. It knows what two-dimensional images of cats, butterflies, and mice look like. It also knows what things that are red, yellow, blue, green, and indeed blue with red spots look like. Give it an image of something it has never seen before, and it will try to categorise it. Equally, ask it to picture something it has not seen, but has the "language" for - such as a blue cat - and it will.
That might not sound like a lot. But it is actually an essential breakthrough, because, as Professor Aleksander points out, the ability to recognise "redness" - or any other sort of -ness - is something that philosophers have long maintained is the province only of conscious beings. And now he has achieved it on a humble PC.
"Philosophers call it the 'qualia' - the essence, the quality - of a thing," he explains. "A red boat, a red cat, both have 'redness'. They say it can't simply be something in the neurons." Yet he can observe the part of the system which observes colour decide that something is red, or reddish, while other parts haven't decided what the object actually is.
That separation of processing is another key part of consciousness, he thinks. "It's an emergent property of neural centres which interact," he says. (An "emergent property" is behaviour which only becomes apparent when you have sufficiently many individual components acting at the same time. For instance, a hundred neurons gives you nothing; a hundred billion, a human being.)
Though Professor Aleksander has been researching this field of artificial intelligence for 30 years, this breakthrough by his team at Imperial College has only been made in the past six months. The key, he says, lies in creating a set of neural networks complex enough that they can mimic the action of part of the human brain.
Neural networks are computer analogues of the neurons in our brains: they receive inputs from a number of sources and, depending on what they have been "taught" to recognise, produce a certain output. For example, a neuron in your brain, or a neural network in a computer, whose function is to detect yellow in a scene will "fire" if its input includes the visual representation of a banana or a sodium streetlight.
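The behaviour described above can be sketched as a single artificial neuron: a weighted sum of inputs passed through a threshold. The "yellow detector" below is an illustration of the principle only, with hand-picked weights; in a real network such as Professor Aleksander's, the weights would be learnt from examples.

```python
# A minimal artificial neuron: a weighted sum of inputs passed
# through a threshold. The weights here are illustrative only --
# in a trained network they would be learnt, not hand-picked.

def neuron_fires(inputs, weights, threshold=0.5):
    """Return True if the weighted sum of inputs crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return activation >= threshold

# A toy "yellow detector": inputs are (red, green, blue) intensities, 0..1.
# Yellow light is strong in red and green, weak in blue.
yellow_weights = [0.5, 0.5, -0.8]

banana_pixel = [0.9, 0.85, 0.1]   # yellowish
sky_pixel = [0.2, 0.4, 0.9]       # bluish

print(neuron_fires(banana_pixel, yellow_weights))  # True  (fires)
print(neuron_fires(sky_pixel, yellow_weights))     # False (stays quiet)
```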
By building neural networks up and interlinking them to create more and more complex feedback, you eventually produce a system whose rules are literally unknown. No person has programmed them. All you know is how it reacts.
Professor Aleksander's team has produced the software equivalent of 250,000 neurons with four million connections. The advantage of his machine-based version is speed - "the neurons in our brain only fire about 100 times a second". Using a 200MHz PC - with the processor "firing" 200 million times a second - leaves headroom for the programs necessary to create artificial neurons. "The speed advantage lets us model things that go on in the brain even though the number of cells is smaller," he says.
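That speed advantage can be made concrete with a back-of-the-envelope calculation using the figures quoted above:

```python
# Headroom calculation using the article's figures: biological
# neurons fire about 100 times a second; a 200MHz processor
# clocks 200 million times a second.

biological_rate = 100            # neuron firings per second
processor_rate = 200_000_000     # clock cycles per second (200MHz)

# Cycles available in the time one biological neuron takes to fire once.
headroom = processor_rate // biological_rate
print(headroom)  # 2000000
```

Two million clock cycles per biological firing interval is what lets the software simulate many neurons in real time, despite having far fewer cells than a brain.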
The system he has set up is a combination of vision and linguistic representation. The "visual" network (a 64 by 64 grid onscreen) is shown a picture; the "language" network is told that it is a cat; the "pattern" network that it is red. After about an hour's tuition, it can recognise all sorts of cats and other objects, in all sorts of colours - and even imagine them in impossible colours.
The discovery, he says, is that the essential element for consciousness is a feedback system between at least two such "modalities". Humans have at least five such modalities. We call them senses.
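The idea of two linked modalities - vision and language, each able to prompt the other - can be sketched in a few lines. This is a toy of the concept only; the names and structure below are illustrative, not the actual neural architecture.

```python
# A toy sketch of cross-modal association: a "vision" modality and a
# "language" modality linked in both directions, so the system can go
# from an image to a name (recognising) or from a name back to a
# stored image (imagining). Illustrative only, not the real network.

class CrossModalMemory:
    def __init__(self):
        self.vision_to_language = {}
        self.language_to_vision = {}

    def teach(self, image, label):
        """Associate an image pattern with a label, in both directions."""
        self.vision_to_language[image] = label
        self.language_to_vision[label] = image

    def recognise(self, image):
        """Vision -> language: name what is seen."""
        return self.vision_to_language.get(image, "unknown")

    def imagine(self, label):
        """Language -> vision: recall an image from its name."""
        return self.language_to_vision.get(label)

memory = CrossModalMemory()
memory.teach(image="::cat-silhouette::", label="cat")
print(memory.recognise("::cat-silhouette::"))  # cat
print(memory.imagine("cat"))                   # ::cat-silhouette::
```

The crucial difference in the real system is that both directions are learnt by interacting neural networks rather than stored in lookup tables - which is why it can generalise to cats it has never seen, and imagine a blue one.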
In building his system, he says, "you end up with a virtual machine which becomes artificially conscious of its virtual world, the one that you expose it to in the machine. But you could easily move that into a robot."
Instead of showing the robot screen images, you could hook up a digital camera to its input. With sufficient education about the "names" of things it was seeing, you would develop a sentient robot. "It will develop a sense of 'self'," Professor Aleksander says. "It can develop an internal representation of its own effect on the world."
One might argue that Professor Aleksander is cheating - that the machine is being given a language, and told what the answers are. But the words used for the objects are more for our convenience, so we can observe the system deciding something is red. The neural network has already determined what that something is; all it needs is a label to hang on it. After all, parents teach children the names of objects in the same way: a child is conscious and has the capability to learn, but needs a common language to communicate.
Does this mean then that language is a prerequisite of consciousness? "An object that has a language system will have greater consciousness than one that doesn't. But it's not a prerequisite. You just need more than one modality."
So what would a machine that was conscious of the outside world, and us, be like? Would we like them? Would they like us? Might conscious machines become cleverer than their makers? "My pocket calculator is cleverer than me - in its particular domain. You'll have robots that are more dextrous, or better able to search Mars than humans. But whether they will solve philosophical problems is another matter ... Maybe I'm being an arrogant human; but I don't know where this leap into greater overall 'smartness' would come from. I think they'll have peculiar characteristics - they'll use language very well, yet have the sentience of a slug."
And what about fears that they might run amok and slay us? "All the science fiction tales give the machine elements which aren't about consciousness, but about being human - such as ambition."