SCIENCE: FROM SOCRATES TO SILICON CHIP?

Artificial intelligence has evolved so far that Deep Blue can beat Garry Kasparov at chess. So are computers nearly conscious? The neuroscientist Susan Greenfield argues that it all depends on what 'conscious' means

Susan Greenfield
Sunday 25 February 1996 00:02 GMT

CONSIDER the potential powers of computers. Alternatively, reflect for a moment on the elusive, indefinable nature of your own consciousness. Are they comparable? Such considerations can, of course, inspire myriad hopes, fears, predictions, strategies and controversies. Part of the problem is to identify a clear and unambiguous question that can inspire a progressive line of argument. One way of focusing the mind is to explore how the two issues of computers and consciousness relate to each other. In fact, all discussion of computers and consciousness can be boiled down to just two questions: can a computer be conscious? And does the brain work like a computer?

Answering one will not help with the other. If a computer were conscious, there would be no reason to conclude that our biological brains worked like a non-biological counterpart. Conversely, processes in our brains could follow the same mode of operations as a computer, but that would not imply that machines, by sheer virtue of their being computers, were conscious.

Of the two scenarios, that of conscious computers is the easier to imagine, as we need not be troubled by a prior knowledge of how the biological brain might "work" in the first place. We do have to worry, though, about what is meant by "conscious". Although I (and I assume everyone else) have a feeling about what consciousness is, and a firm conviction that at least I am myself conscious, the term eludes definition. While conscious you might be moving or speaking, but of course you do not need to be. Conversely, movement and speech can be contrived mechanically in the simplest of toys without any but the youngest child imputing an independent awareness to them. The quintessential feature of your consciousness (and presumably everyone else's) is that it is subjective: everything else is superfluous.

How then will an outsider test you for consciousness? Almost 50 years ago the mathematician Alan Turing devised his hypothetical Turing Test: a computer would be deemed to be conscious when an interviewer, with impartial access to both a machine and a person, could not distinguish between the two. Modified Turing Tests are now run in the United States, the modification being to restrict the subject on which the computer or person may be questioned. Even with these changes, which spare the machine from having to display all the vagaries of broken trains of thought and illogical associations that so characterise human thinking, the computers have still not fooled anyone (though it is a sobering thought that one human was misjudged to be a computer). Moreover, it is hard to see how a one-year-old child, or a dog, both indisputably conscious, would stand a chance of passing.
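To make Turing's procedure concrete, here is a minimal sketch of the imitation game in Python. It is purely illustrative: run_turing_test, machine_reply, human_reply and judge are hypothetical names introduced for this example, not part of any actual test protocol.

```python
import random

def run_turing_test(questions, machine_reply, human_reply, judge):
    """Return the fraction of rounds in which the judge mistakes
    the machine for the human -- how often the machine 'passes'."""
    fooled = 0
    for question in questions:
        # The judge sees only the two answers, in a random order.
        answers = [("machine", machine_reply(question)),
                   ("human", human_reply(question))]
        random.shuffle(answers)
        labels = [label for label, _ in answers]
        texts = [text for _, text in answers]
        guess = judge(question, texts)  # index of the answer judged human
        if labels[guess] == "machine":
            fooled += 1
    return fooled / len(questions)
```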

The Turing Test highlights the problem of operational definitions of consciousness. Your first-person personal world need have very little relation to the outside one. The psychologist Donald Mackay expressed this dissociation very well when he pointed out that an actor spouting Hamlet's lines and behaving as Hamlet did not have Hamlet's consciousness. He was not actually the tortured prince of Denmark.

Leaving aside, then, the problem that we would never really know in any case if a computer were conscious, is it at all likely? Certainly there are those who are expecting the new Jerusalem of silicon overlords any day now. Marvin Minsky of Massachusetts Institute of Technology (MIT) has claimed that artificial systems of the future will think a million times faster than we do and that they should be regarded not as "them" to our "us", but rather as our "mind children", a term coined originally by Hans Moravec of Carnegie Mellon University in Pittsburgh. Indeed, Deep Blue, the computer that has succeeded in beating Garry Kasparov at chess, could be seen as one of MIT's "children", being developed for IBM by a team originally from MIT.

In a similar spirit, the Nobel laureate Gerry Edelman has devised a series of "synthetic animals" called "Darwin" that move around in a confined space, learning about their environment and acting accordingly with no externally imposed agenda; each device in the series has been more intelligent than the last. Edelman reckons that before the end of the next century, synthetic successors to this type of device will be conscious.

But what reason have we to go along with this line of thinking? What will be the special extra factor that exists in these new "brains" that was lacking in the previous ones? No one as yet expects a working model, but to be convincing, computationalists should at least be able to outline a theoretical scheme rather than expect us to accept it on blind faith. But there's the rub. As yet we have no inkling as to how consciousness, a first-person experience, could arise from a collection of non-conscious elements, be they made of silicon or from the real brain.

As a neuroscientist I know that certain events in certain parts of the brain cause the sensation of pain, and others feelings of pleasure. But I have no idea how the one actually leads to the other. How could I therefore even dream of replicating this causal connection in an artefact? What principles would I employ? It really is not helpful to assume, as Minsky does, that as long as the system were sufficiently complex, consciousness would spontaneously be generated. Even were such a scenario to happen in a silicon system, how would that help us understand the physical basis of consciousness? For both computationalists and neuroscientists the physical basis of consciousness is the final frontier, the most challenging question. But surely the answer will not be reached more rapidly by going one step back and dealing with silicon systems, where consciousness is, to say the very least, more in doubt than it is in real brains.

A question that is very rarely asked about sentient computers is this: even if they did roam the earth, what use would they be? If they had "simpler" brains, they might have missed out on the vital ingredient that arises only from more complex systems and that would only then have made them conscious. Or they might have just a "simpler" consciousness: but simpler than what - a rabbit, a flea, a mid-term human foetus? Certainly, if such artefacts were conscious in the same way as we are, we could no more tamper with them than we could with each other to discover more about how consciousness is generated. A consciousness that was indistinguishable from that of a human would surely lay claim, justifiably, to similar rights. In short, of course we can imagine conscious machines; but how would we ever know they were conscious, what principles or strategies are there for constructing such consciousness, and what use would they finally be?

Consider, then, the alternative question: whether the biological brain works like a computer. Here there is potentially much more fertile ground for progress, thanks to the ever-burgeoning knowledge of brain functioning. For the last 30 years neuroscience has flourished within the paradigm of neuronal communication: "synaptic transmission". In brief, a neuron or brain cell generates an electrical signal due to a transient change in the distribution of ions, and hence charge, between the inside and the outside of the cell. This impulse is then propagated to the end of the neuron, whereupon it causes the release of a chemical (a transmitter) which diffuses across the narrow gap between cells (the synapse). Once the transmitter reaches the target neuron on the other side of the synapse, it triggers a change in the distribution of ions and thus, in this second cell, causes the generation of a further electrical signal (known as an "action potential"). During the 1960s and 1970s much was made of the fact that some transmitters triggered the generation of action potentials ("exciting" the cell), whereas others suppressed these electrical signals ("inhibiting" it). Inhibition and excitation were then seen as the building blocks of brain functioning. How seductive it was to draw parallels with a computer, with its on/off switching.
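The on/off parallel described above can be made concrete with a toy model. Below is a minimal integrate-and-fire sketch in Python: excitatory inputs push the cell's potential towards a firing threshold, inhibitory inputs pull it away. The threshold, leak factor and input values are illustrative assumptions, not figures from the text.

```python
def step(potential, inputs, threshold=1.0, leak=0.9):
    """Advance the cell one time step; return (new potential, fired?)."""
    potential = potential * leak + sum(inputs)  # decay, then add synaptic input
    if potential >= threshold:                  # enough net excitation: fire
        return 0.0, True                        # action potential, then reset
    return potential, False

v = 0.0
for excitatory, inhibitory in [(0.6, 0.0), (0.6, -0.3), (0.7, 0.0)]:
    v, fired = step(v, [excitatory, inhibitory])
    print(f"potential={v:.2f} fired={fired}")
```

Run as written, the cell stays silent for two steps; on the third, accumulated excitation crosses the threshold and the cell "fires".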

Moreover, some brain processes proved highly tractable to computer modelling. For example, the cauliflower-shaped structure at the back of the brain (the cerebellum) plays a large part in the co-ordination of senses and movement needed for sophisticated skills such as driving and playing the piano. It really did look as though the computer was a useful analogy for the brain. Indeed, some parts of the brain do work like a computer; but how do these processes relate to consciousness? The very skills that the cerebellum enables us to perform are performed without conscious awareness. Obviously, when driving we are not unconscious, but we are unconscious of making the decision to press the brake when we see a red light. It is no coincidence that the computational approach to modelling brain functions has worked best for processes such as these, which are "automatic", namely unconscious and machine-like.

In contrast, consider movements that are not "automatically" triggered by an external sensory cue, but spring from the inner world of individual consciousness. This translation of thought into action is the very link that is weakened in Parkinson's disease: the patient wants to move, but cannot. This disease is caused primarily by a lack of a particular transmitter, dopamine, in a certain population of neurons. But other neurons in other parts of the brain also use dopamine and are not affected in Parkinson's disease.

Some such neurons, however, are implicated in the totally different disorder of schizophrenia, which is associated with a functional excess of dopamine. Incidentally, it is because the same transmitter has different roles in different parts of the brain that patients suffer side effects from the drugs used respectively to enhance (L-DOPA) or block (chlorpromazine) the actions of dopamine: Parkinsonian patients can experience schizophrenia-like hallucinations, and schizophrenics can suffer from Parkinsonian-like disturbances of movement.

How could the actions of dopamine be modelled on a computer? It would not be good enough just to have a means of exciting or inhibiting nodes. Other transmitters can, like dopamine, change the electrical signalling between neurons, yet they are of no relevance to schizophrenia or Parkinson's disease: we know that dopamine does what it does specifically because it is dopamine. Moreover, the situation is further complicated by the fact that dopamine interacts in a highly selective way with other transmitters, like a multi-way see-saw.
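One way to see what is lost when connections carry only a signed weight is to tag each one with its chemical identity, so that an intervention can single out one transmitter system. The sketch below is a hypothetical illustration of that point; the class, transmitter names and numbers are assumptions made for the example, not a model proposed in the text.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    weight: float     # sign alone gives excitation (+) or inhibition (-)
    transmitter: str  # chemical identity, e.g. "dopamine" or "GABA"

def apply_drug(synapses, target, factor):
    """Scale only the synapses that use the targeted transmitter."""
    for s in synapses:
        if s.transmitter == target:
            s.weight *= factor

circuit = [Synapse(0.8, "dopamine"), Synapse(-0.5, "GABA"), Synapse(0.8, "dopamine")]
apply_drug(circuit, "dopamine", 0.2)  # e.g. a dopamine blocker such as chlorpromazine
print([round(s.weight, 2) for s in circuit])  # only the dopamine synapses change
```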

Finally, just to make life really tough for anyone attempting to build their own brain, it transpires that the actions of dopamine, as with many transmitters, need not be simply excitatory or inhibitory after all. Instead, they can "modulate" coincidental, pre-existing or potential signals, without having any direct effect of their own. What we are dealing with here should not be equated with memory: rather, modulatory influences last from seconds to minutes, and perhaps hours. This phenomenon of neuromodulation, a concept attracting increasing attention from neuroscientists, is a means whereby the brain can vary its responses from one moment to the next.
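A crude way to picture neuromodulation as described here is as a gain applied to whatever signal happens to coincide with the modulator: alone it does nothing, but it changes what another signal achieves. The function and numbers below are illustrative assumptions.

```python
def response(signal, modulator=0.0):
    """Postsynaptic effect: the modulator multiplies, never adds."""
    gain = 1.0 + modulator   # modulator raises the gain on coincident input
    return gain * signal     # no signal, no effect -- modulated or not

print(response(0.0, modulator=0.8))  # 0.0: the modulator alone does nothing
print(response(0.5))                 # 0.5: baseline response to a signal
print(response(0.5, modulator=0.8))  # 0.9: the same signal, amplified
```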

Interestingly enough, it is these modulatory actions of various transmitters that might well be the target of drugs known to modify mood and hence consciousness. Prozac, morphine, amphetamine and LSD all work in different ways and/or involve different transmitter systems, and result in different types of conscious states. Hence there is obviously a strong chemically selective element in determining consciousness. It would be hard to see how this chemical selectivity could be preserved in computer models. Admittedly, advanced machines are no longer in thrall to digital on/off operations, and a silicon "retina" and "neuron" have been built with analogue (ie, dimmer-switch) properties. Even so, whatever means were used to factor in the analogue action of dopamine would hold equally for other chemical messengers, so how would one distinguish them qualitatively in an artificial system?

Moreover, as we have just seen in real life, the actions of dopamine are different in different brain regions, so it is not immediately obvious how one would programme in site-specificity, where multi-way chemical balancing acts give each region its own pharmacological signature. It is this chemical specificity that endows the brain with an extra dimension, one which has not yet, in my view, been satisfactorily addressed by computer modellers. The only way would be to build nothing less than a real brain. Even were such an exercise plausible, it would not help us understand how non-conscious components collectively generate consciousness, be the brain biological or of silicon.

One more argument posed by computationalists is almost an argument by default. The only ultimate alternative to a buildable brain, such an argument posits, is to subscribe to the idea of vitalism: that there is some magic spark in living things, referred to some 200 years ago as natura naturans. Since this life force would be irreducible, and ultimately therefore incomprehensible, it would clearly not be a satisfactory explanation for anyone pressing for a scientific approach to consciousness. On the other hand, is computation the only other way?

I would agree with artificial brain modellers that it is reasonable to assume that consciousness is an emergent property (one where the whole is more than the sum of its parts) of non-conscious elements. Yet, though I do not take the view that living matter is endowed with magical properties, it still generates emergent properties that are not realistically reproducible in silicon. It is one thing to imagine building a conscious machine, quite another to build it. Understanding the physical, causal basis of how these emergent properties are generated in the brain is, in my view, the ultimate challenge for neuroscience.

But I am not betraying a biologist's bias here: I have no inherent distaste for artificial brains. It is just that I cannot accept artificial minds as an article of faith. If a computationalist came up with a realistic strategy, however hypothetical at this stage, for generating the kind of consciousness that arises through the ceaseless unfolding of chemical symphonies in the brain, I would not be looking for a ditch to die in. Artificial neuronal networks can, of course, display an impressive capacity to learn on their own; they achieve feats of problem-solving and speeds of calculation that make us look Neanderthal; they can even exploit light-sensitive protein switches. But when it comes to consciousness, they have not delivered.
