
Network: Master of the thinking machines

Igor Aleksander has spent a lifetime designing computers with complex abilities. In this edited extract from the new Radio 4 series Eureka!, he talks to Barbara Myers about the meaning of artificial consciousness and whether computers will ever be a threat to man

Barbara Myers
Monday 19 May 1997 23:02 BST

Barbara Myers: It's the ability of the human brain to think and to remember, to imagine and create, and to know that we're doing these things, that sets us apart from the rest of the animal kingdom and certainly makes man very different from machine. But according to Igor Aleksander, professor of engineering at Imperial College, London, we're not as different as we may like to believe. He has spent a lifetime designing computers with names such as Wizard and Magnus, which have surprisingly complex capabilities. All this work is based on the concept of neural networks, but what are neural networks?

Igor Aleksander: It's artificial neural networks that we're talking about. Real neural networks are the things that make our brains think. We have 10 billion little cells in our brains, which is all that our brains are. Now, artificial neural networks are computing devices which are very different from conventional computers. A conventional computer is just a large filing cabinet in which you store your information, you do a few operations, and a programmer decides everything that's got to be decided. A neural network learns a bit like the brain does. It captures things out of experience and avoids a programmer having to work everything out ahead of time. So that's what turns me on.
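[To make the distinction concrete, here is a minimal sketch in Python - an illustration for this extract, not Aleksander's own code - of a single artificial neuron learning the logical AND function from examples, where a conventional program would simply have the rule worked out in advance by a programmer:]

```python
# A single artificial neuron that "captures things out of experience":
# its weights are adjusted from examples, not worked out by a programmer.

def train_neuron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, initially knowing nothing
    b = 0.0         # threshold (bias)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1  # nudge each weight toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "experience": the four cases of logical AND, never stated as a rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(examples)
for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```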

BM: How and when were you first turned on to this idea of neural networks?

IA: Very early on in my study of engineering I was fascinated by the brain, and by the fact that it seemed to be such a competent and wonderful piece of equipment that keeps us going for the rest of our lives, and, looking around at the electronics that I was learning about, it all seemed to be far less competent. So that created a tension - what is it that goes on in the brain that makes it so competent, which we as engineers haven't quite got around to in our engineering?

BM: But what, in the first instance, turned you on to engineering, then?

IA: It was the creative aspect of engineering that I really found exciting. Engineering seemed to be something where you had to make something out of nothing, and it's through that process that I realised that by making things you really understand how things work, and then the brain became a target: a complex organism which might be understood by trying to create something out of engineering materials, or out of engineering skills, which would emulate it ...

Computers hadn't been invented when I was a student, but while I was working in industry computer design became a very important thing and I actually did my PhD in that area. But, again, I could see that fairly classical design of computers wasn't all that exciting, so I started looking around for other forms of computing and that's where the neural network came in.

Wizard was a hardware neural network which was based on quite a lot of computer technology that was known at that time [late Seventies, early Eighties] and it was probably one of the first pattern-recognisers that used neural net techniques. But in those days you had to keep quiet about that, because neural networks were seen as a lunatic fringe activity. So we called it an "adaptive pattern recogniser". Its application was difficult pattern-recognition tasks. I was once asked by a BBC journalist, "Will this thing do things like recognise faces?" I said no, never. One of my students heard me say that and one of the first things he did was try to get it to recognise faces, and it did, so we recognised we could do things which were difficult with conventional computers.
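[Wizard - published as the WISARD machine - was built from RAM-based "n-tuple" nodes. The sketch below is a toy Python illustration of that principle, not the original design: fixed random groups of pixels address lookup tables, training marks the bit-patterns each group has seen, and recognition counts how many groups respond to a new image. In the full machine one such discriminator is trained per class - one per face, say - and the input goes to whichever scores highest:]

```python
import random

# Toy illustration of a RAM-based "n-tuple" discriminator, the idea
# behind Wizard. Sizes and data structures here are illustrative, not
# the original hardware. Training only writes into lookup tables - no
# arithmetic on weights - which suited the technology of the day.

N = 4  # pixels sampled by each RAM node

class Discriminator:
    def __init__(self, image_size, rng):
        order = list(range(image_size))
        rng.shuffle(order)
        # A fixed random partition of the image into n-tuples.
        self.tuples = [order[i:i + N] for i in range(0, image_size, N)]
        self.rams = [set() for _ in self.tuples]  # one table per tuple

    def _addresses(self, image):
        for tup, ram in zip(self.tuples, self.rams):
            yield tuple(image[i] for i in tup), ram

    def train(self, image):
        for addr, ram in self._addresses(image):
            ram.add(addr)  # mark this bit-pattern as seen

    def score(self, image):
        return sum(addr in ram for addr, ram in self._addresses(image))

rng = random.Random(0)
d = Discriminator(image_size=16, rng=rng)
pattern = [1, 1, 1, 1, 0, 0, 0, 0] * 2
d.train(pattern)
print(d.score(pattern))     # all 4 tuples respond: recognised
print(d.score([0, 1] * 8))  # fewer tuples respond: unfamiliar
```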

BM: So it could do what it could do because, unlike other computer technology, it didn't work in a direct-line logic; it was more intuitive, pulling things together from different directions, as the brain does?

IA: It was learning, rather than relying on a programmer to work out everything ahead of time. How would you program a computer to recognise a face? You would measure the distance between the eyes, then the distance between one eye and the nose and so on, and that would take for ever on a computer and it wouldn't work, even with a lot of computing power.

BM: But now, if we move on to the next major development, a machine called Magnus, is that son of Wizard, or is it a different breed?

IA: It's a completely different breed. Wizard would take in an image of a face, say, and someone would say to Wizard by typing it in, "This face's name is Fred" and Wizard would take in the image and produce Fred as its output and it would do this for cups and saucers and anything you cared to show it. But if you asked it, "What does Fred look like?" it wasn't able to answer. So what was missing in Wizard was an internal representation of the world that it was exposed to and learned. Now Magnus has these internal representations. We can now show on a screen what Magnus is "thinking". When it recognises a face, it actually visualises that face.

The simple engineering trick that one does is to interconnect the neurons, the cells to one another, so that they recognise what each other is doing, and the whole consensus of these neurons working together comes up as a known image; that gives Magnus its power.
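[The "consensus" of interconnected neurons is what engineers now call attractor dynamics. Magnus itself used weightless, RAM-based neurons, so the Hopfield-style network sketched below is only a stand-in for the principle: feed each neuron's output back to the others and a noisy cue settles onto the stored pattern, so the net reconstructs - "visualises" - an image rather than merely labelling it:]

```python
# Stand-in for the "consensus" idea: neurons wired to one another
# settle on a stored pattern (Hopfield-style associative memory).

def store(patterns, n):
    # Hebbian weights: each neuron learns to agree with its neighbours.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Repeatedly let every neuron respond to the others until they agree.
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

face = [1, 1, -1, -1, 1, -1, 1, -1]  # a stored "image" (+1/-1 pixels)
w = store([face], n=8)
cue = [1, 1, -1, -1, -1, -1, 1, -1]  # the same image with one pixel flipped
print(recall(w, cue) == face)        # True: the net "visualises" the face
```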

BM: You refer to neurons - exactly the word we would use if we were talking about brain cells - but in this case those neurons are what, simple processors?

IA: If built in hardware, they are very simple processors. Mostly, nowadays, they are built as a few lines of program, which again brings the programmer in, but the programmer doesn't have to know what these neurons are going to get up to in their lives, and it's the interaction between these little packages of lines of program which creates a neural net. But it's a bit confusing to think of it that way. The best way to think of it is that, inside a conventional computer, we have created a model of a neural net, and we can treat it as such; we can count up the number of neurons it has and the number of connections each neuron has, and so it's what computer scientists call a virtual machine.
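[The virtual-machine point can be shown in a few lines of Python - purely illustrative, not the Magnus source: the net exists only as data inside a conventional program, yet we can treat it as a machine and count its neurons and connections:]

```python
# Each neuron really is "a few lines of program"; the net is a model
# held inside a conventional computer - a virtual machine.

class Neuron:
    def __init__(self):
        self.inputs = []  # connections from other neurons
        self.state = 0

    def update(self):
        # The neuron's "life": respond to whatever its inputs are doing.
        self.state = 1 if sum(n.state for n in self.inputs) >= 1 else 0

net = [Neuron() for _ in range(100)]
for n in net:
    n.inputs = net[:10]  # wire each neuron to a fixed set of ten (arbitrary)

print("neurons:", len(net))                             # 100
print("connections:", sum(len(n.inputs) for n in net))  # 1000
```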

Think of the brain as a very large handkerchief: if you unfold it, it forms a large surface. The cortex is a large, flat surface all scrunched up inside our heads; unfolded, it would be about 1 metre by 1 metre, and what Magnus represents is something the size of a quarter of a thumbnail, perhaps. So it's a very, very tiny part of what we have in our heads, but the interesting thing about it is that with this tiny part we can see some very interesting things happening, things which we recognise really happen in our heads, like having mental imagery, like being able to attend to one bit of mental imagery as opposed to another, like being able to use something which is akin to a natural language. All of this can happen in a tiny little bit of silicon, if you like, or in programming, which represents a very small part of the brain. It makes you stand back and think, my goodness, the brains we have in our heads aren't half wonderful!

I'm serious about artificial consciousness, but the word "artificial" is more important than the word "consciousness". One might say that perhaps one shouldn't use the word consciousness at all, but let's put that to one side for a moment. The Magnus-like devices - through building up this internal world that is a representation of a world which they sense through their senses - have many of the characteristics that we associate with consciousness. These are mainly cognitive characteristics - characteristics of thinking, such as memory, attention, the ability to use language, but also a knowledge of self in the sense that Magnus in its virtual world can explore that world and build up a sense of what it can do in that world and, if required to do some things, a knowledge of whether it's able to do this or not.

When I read 17th-century discussions about consciousness, this is where they started. How is it that we, with whatever we have in our heads, are capable of becoming conscious of a whole lot of different things in our world? Reading that made me feel that if I don't call this "artificial consciousness" we'll be missing a real trick, because we may be able to get much more of a handle on what we call consciousness through these artificial networks than we do by philosophising about it.

BM: So it's a way into the notion of human consciousness to consider that what the machine has is artificial consciousness?

IA: Yes, indeed, and it's the creation of the distinction between artificial consciousness and human consciousness which is interesting, because I then have to ask the question, "What are the things the two have in common, and in what way are they different?" And I think that makes me very comfortable with the idea of an artificial consciousness which could have a natural language conversation with one of its users but not have the sentience of a slug. So one can create in this artificial world objects that can be studied in order to give us more of a knowledge of what we, in a very woolly way, call consciousness.

BM: Your detractors balk at the idea of consciousness in a machine, possibly because it's akin to making a machine live, making an inanimate object animate, and that's very dangerous territory, isn't it?

IA: Yes, artificial consciousness may be the oxymoron of all time, but I think that it helps us to distinguish between consciousness and living: the two are not necessarily synonymous. Of course, we have a lot of cultural baggage which says that they are, and even more of it says that consciousness can become confused with soul, although very few people would make that mistake these days, and there is a lot of sensible scientific activity which tries to relate the firing of the neurons we have in our heads to the conscious experience that human beings have.

Now, the conscious experience and the consciousness of a living object will always be related to that object's biological life. The consciousness that my objects have will not be related to any form of biological life, but will be related to conversations they may have with humans, and they will be entirely honest about being artificially conscious. But it's the commonalities between the two that are vital, for one very important reason. To someone who's suffering a distortion of consciousness through some deficit in the brain, consciousness isn't a philosophical discussion. Consciousness is real; a distortion of consciousness in Alzheimer's patients is real, and it has to do with the biochemical processes that have gone wrong.

Now, neurobiologists have a certain approach to studying these things which does not encompass some of the principles that engineering can bring to it. So it's by working with neurobiologists, bringing our engineering skills together with their neurobiological skills, that one gets a better view of what's called consciousness.

BM: Is there a danger of pushing this too far down the road, of making these machines so conscious, so like us, so good at what we reckon to be good at, that they're going to take over?

IA: I feel there's no danger of this at all. If you think of the thumbnail idea, or even if you think that with the passage of time people will build bigger and better computers and you'll be able to stretch this thumbnail to a handkerchief, even then the whole business of living and real-life experience will be missing in these machines. Why should we be building them at all? Mainly because the world is looking out for machinery which gets on better with people ...

So there is an engineering need to make machines that are more competent in that way, but something tells me they're not going to ... take over the world. This again is a sort of confusion - that consciousness leads to playing power games, perhaps, or having needs which are expansionist - but it's people who do that, not machines. I think we can have conscious machines that don't have such needs and are still very interesting to work with.

Eureka! airs on Radio 4 tomorrow at 9pm.
