Science: The mind machine
Igor Aleksander has created a real-life successor to Hal. Charles Arthur hears how
Monday 23 March 1998
How did you do? If you found it hard, perhaps you ought to know that Igor Aleksander has a machine which can do that easily. When he asks it (in words) to produce an image of "banana" that is "blue with red spots", the image swims on to the screen in seconds.
This, says Professor Aleksander, is indicative that the computer has something which scientists and computer engineers have been struggling towards for more than 50 years: machine consciousness. Yes, the same thing that marked out Hal, the computer in 2001: A Space Odyssey, and the robots of Isaac Asimov's science fiction.
At the moment, this machine consciousness can only categorise and imagine things in a limited domain. It knows what two-dimensional images of cats, butterflies, and mice look like. It also knows what things that are red, yellow, blue, green, and indeed blue with red spots look like. Give it an image of something it has never seen before, and it will try to categorise it. Equally, ask it to picture something it has not seen, but has the "language" for - such as a blue cat - and it will.
That might not sound like a lot. But it is actually an essential breakthrough, because, as Professor Aleksander points out, the ability to recognise "redness" - or any other sort of -ness - is something that philosophers have long maintained is the province only of conscious beings. And now he has achieved it on a humble PC.
"Philosophers call it the 'qualia' - the essence, the quality - of a thing," he explains. "A red boat, a red cat, both have 'redness'. They say it can't simply be something in the neurons." Yet he can observe the part of the system which observes colour decide that something is red, or reddish, while other parts haven't decided what the object actually is.
That separation of processing is another key part of consciousness, he thinks. "It's an emergent property of neural centres which interact," he says. (An "emergent property" is behaviour which only becomes apparent when you have sufficiently many individual components acting at the same time. For instance, a hundred neurons gives you nothing; a hundred billion, a human being.)
Though Professor Aleksander has been researching this field of artificial intelligence for 30 years, this breakthrough by his team at Imperial College has only been made in the past six months. The key, he says, lies in creating a set of neural networks complex enough that they can mimic the action of part of the human brain.
Neural networks are computer analogues of the neurons in our brains: they receive inputs from a number of sources and, depending on what they have been "taught" to recognise, produce a certain output. For example, a neuron in your brain - or a neural network in a computer - whose function is to detect yellow in a scene will "fire" if its input includes the visual representation of a banana, or a sodium streetlight.
By building neural networks up and interlinking them to create more and more complex feedback, you eventually produce a system whose rules are literally unknown. No person has programmed them. All you know is how it reacts.
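The yellow-detector described above can be sketched in a few lines. This is emphatically not Aleksander's software - just a minimal, perceptron-style illustration of a single unit that "fires" on the patterns it has been taught, with made-up RGB values standing in for a visual input:

```python
# A toy, perceptron-style "neuron": a hugely simplified sketch of the
# teach-then-fire idea, not Aleksander's actual networks.

def fire(weights, bias, inputs):
    """Weighted sum of the inputs; the neuron 'fires' if it exceeds zero."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias > 0

def train(samples, epochs=20, rate=0.1):
    """Perceptron rule: nudge weights towards inputs the neuron wrongly
    ignored, and away from inputs it wrongly fired on."""
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# RGB triples: the neuron is "taught" that yellow (high red and green,
# low blue) should fire, and that pure red, green or blue should not.
samples = [
    ((1.0, 1.0, 0.0), 1),  # yellow banana      -> fire
    ((1.0, 0.9, 0.1), 1),  # sodium streetlight -> fire
    ((1.0, 0.0, 0.0), 0),  # red                -> stay quiet
    ((0.0, 0.0, 1.0), 0),  # blue               -> stay quiet
    ((0.0, 1.0, 0.0), 0),  # green              -> stay quiet
]
weights, bias = train(samples)
print(fire(weights, bias, (0.95, 0.95, 0.05)))  # a yellowish input -> True
```

One such unit is trivially analysable; the point of the article is what happens when hundreds of thousands of them are wired together with feedback, at which point no such line-by-line account of the rules exists.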
Professor Aleksander's team has produced the software equivalent of 250,000 neurons with four million connections. The advantage of his machine-based version is speed - "the neurons in our brain only fire about 100 times a second". Using a 200MHz PC - with the processor "firing" 200 million times a second - leaves headroom for the programs necessary to create artificial neurons. "The speed advantage lets us model things that go on in the brain even though the number of cells is smaller," he says.
The system he has set up is a combination of vision and linguistic representation. The "visual" network (a 64 by 64 grid onscreen) is shown a picture; the "language" network is told that it is a cat; the "pattern" network that it is red. After about an hour's tuition, it can recognise all sorts of cats and other objects, in all sorts of colours - and even imagine them in impossible colours.
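How two such networks might be coupled so that a label recalls an image, and an image recalls its label, can be illustrated with a toy bidirectional associative memory (a Kosko-style construction, not Aleksander's system; the six-"pixel" patterns and four-bit labels below are invented for the sketch):

```python
# Toy bidirectional associative memory: a crude stand-in for two linked
# "modalities" - visual patterns on one side, language labels on the other.
# Patterns are bipolar (+1/-1) vectors; either side can recall the other.

def outer_sum(pairs):
    """Weight matrix: sum of outer products of the stored (visual, label) pairs."""
    rows, cols = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * cols for _ in range(rows)]
    for vis, lab in pairs:
        for i, v in enumerate(vis):
            for j, l in enumerate(lab):
                W[i][j] += v * l
    return W

def sign(x):
    return 1 if x >= 0 else -1

def recall_label(W, vis):
    """Visual pattern in, label out: a forward pass through the weights."""
    return [sign(sum(W[i][j] * vis[i] for i in range(len(vis))))
            for j in range(len(W[0]))]

def recall_visual(W, lab):
    """Label in, 'imagined' visual pattern out: the reverse pass."""
    return [sign(sum(W[i][j] * lab[j] for j in range(len(lab))))
            for i in range(len(W))]

# Two tiny six-"pixel" images with two four-bit "names" (all invented).
cat    = [+1, +1, -1, -1, +1, -1]
mouse  = [-1, +1, +1, -1, -1, +1]
labels = {"cat": [+1, +1, -1, -1], "mouse": [+1, -1, +1, -1]}

W = outer_sum([(cat, labels["cat"]), (mouse, labels["mouse"])])
print(recall_label(W, cat))               # recovers the "cat" label
print(recall_visual(W, labels["mouse"]))  # "imagines" the mouse pattern
```

The reverse pass is the interesting direction: feeding in a name and getting a picture back is, in miniature, the "blue banana with red spots" trick - though the real system composes attributes it has never seen together, which this toy cannot.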
The discovery, he says, is that the essential element for consciousness is a feedback system between at least two such "modalities". Humans have at least five such modalities. We call them senses.
In building his system, he says, "you end up with a virtual machine which becomes artificially conscious of its virtual world, the one that you expose it to in the machine. But you could easily move that into a robot."
Instead of showing the robot screen images, you could hook up a digital camera to its input. With sufficient education about the "names" of things it was seeing, you would develop a sentient robot. "It will develop a sense of 'self'," Professor Aleksander says. "It can develop an internal representation of its own effect on the world."
One might argue that Professor Aleksander is cheating - that the machine is being given a language, and told what the answers are. But the words used for the objects are more for our convenience, so we can observe the system deciding something is red. The neural network has already determined what that something is; all it needs is a label to hang on it. After all, parents teach children the names of objects in the same way: a child is conscious and has the capability to learn, but needs a common language to communicate.
Does this mean then that language is a prerequisite of consciousness? "An object that has a language system will have greater consciousness than one that doesn't. But it's not a prerequisite. You just need more than one modality."
So what would a machine that was conscious of the outside world, and us, be like? Would we like them? Would they like us? Might conscious machines become cleverer than their makers? "My pocket calculator is cleverer than me - in its particular domain. You'll have robots that are more dextrous, or better able to search Mars than humans. But whether they will solve philosophical problems is another matter ... Maybe I'm being an arrogant human; but I don't know where this leap into greater overall 'smartness' would come from. I think they'll have peculiar characteristics - they'll use language very well, yet have the sentience of a slug."
And what about fears that they might run amok and slay us? "All the science fiction tales give the machine elements which aren't about consciousness, but about being human - such as ambition."