THE FUTURE OF SELF

The Cultural Revolution: WEEK 2: Private life: love, sex, lifestyle, identity; MIGHT THE INTERNET, DRIVEN BY THE COLLECTIVE INTELLIGENCE OF MILLIONS, ULTIMATELY BECOME A MORE EFFECTIVE THINKING MACHINE THAN THE HUMAN BRAIN? MARTIN REDFERN PONDERS THE DRAWBACKS
The Independent Culture
In April, a major conference took place in Tucson, Arizona. It was a significant contribution to the information revolution for two reasons: as with a growing number of conferences, its proceedings were circulated to a wider audience by means of the World Wide Web; secondly, its subject was consciousness research. This is an area where one of the basic pillars of Western thought - the division of the world by Galileo and Descartes into the objective world of things and the subjective world of experience - is showing signs of toppling. And the instrument of change is the computer.

Scientific theories of the world are essentially models - maps of reality rather than reality itself. For some time now, the most advanced modelling has been done by computers, and it is here that the subjective and objective worlds are coming together.

Many philosophers and scientists are using computers to help them understand the nature of consciousness; a few are probing the physical brain and finding that its various functions appear to work in a computational way. For some, this raises the intriguing possibility that, if brains are like computers, computers could become conscious - while linking them into global, unregulated networks like the World Wide Web might even lead to a higher form of consciousness.

Opinion is divided. There are those who insist that, however much we know about the brain's workings, we will never explain subjective experience. Some claim that consciousness is inseparable from a biological brain, and that the human brain can solve problems inherently insoluble by computer. The "nothing butters" (ie, "My mind is nothing but a bag of nerves") believe all aspects of consciousness are explicable in terms of neuroscience. And then there is the school of thought epitomised by the American philosopher Daniel Dennett. He describes consciousness as the software run by the brain computer; there is no reason, he argues, why machines cannot be conscious, because we ourselves are machines.

Could Dennett be right? The simple answer is that it may never be possible to tell if a machine is really conscious, any more than it is possible to tell for certain that any person other than oneself is conscious (rather than merely behaving as if they were). The most successful experimental attempts to mimic the activity of brains have used so-called neural networks, in which connections between silicon elements analogous to nerve cells strengthen or weaken as the network "learns". Professor Igor Aleksander, of London's Imperial College, leads a team of computer scientists researching such networks. He insists that the human mind could, in principle, be simulated on a computer - that a machine might "feel", albeit in a machine-like way. He says, for example, that the engineering exists for a computer to feel "fear" about threats to its stored memories. Aleksander thinks neural computers will one day rise to that challenge, and also to that of self-awareness.
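The learning mechanism described above - connections strengthening or weakening in response to experience - can be sketched in a few lines of code. This is a minimal illustration, not Aleksander's actual system: a single artificial neuron adjusts its connection weights until it has learnt the logical AND function.

```python
import random

def step(x):
    """Threshold activation: the unit either fires (1) or not (0)."""
    return 1 if x >= 0 else 0

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # connection strengths
bias = random.uniform(-1, 1)
rate = 0.1  # how strongly each error adjusts the connections

# Training examples for logical AND: fire only when both inputs fire.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(50):  # repeated presentation of the examples
    for inputs, target in data:
        output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - output
        # Strengthen or weaken each connection in proportion to the error
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([step(sum(w * x for w, x in zip(weights, inputs)) + bias)
       for inputs, _ in data])
```

Nothing here is programmed to know what AND means; the rule emerges solely from the repeated nudging of connection strengths, which is the sense in which such networks are said to "learn" rather than being instructed.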

He might be correct. In 1993, researchers at McDonnell Douglas in the USA tried to "kill" a neural network, switching it off progressively by breaking connections at random. Rather than giving out nonsense, it seemed to "fantasise", reliving experiences it had learnt and, ever more slowly, giving out results which seemed to make sense. This seemed curiously similar to the phenomena reported by humans who have had "near-death" experiences.
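The McDonnell Douglas procedure - switching a network off progressively by breaking connections at random - can be imitated on a toy scale. This sketch is purely illustrative (the original 1993 experiment is only described, not reproduced, here): a small "trained" network has its connections severed one at a time, and its recall degrades gradually rather than failing all at once.

```python
import random

random.seed(1)

# Four output units, eight inputs. Each unit should fire for its own
# two-input pattern; the paired positive weights give it redundancy.
patterns = [[1 if i // 2 == j else 0 for i in range(8)] for j in range(4)]
weights = [[2.0 if i // 2 == j else -0.5 for i in range(8)] for j in range(4)]

def recall(pattern):
    """Index of the output unit that fires most strongly."""
    scores = [sum(w * x for w, x in zip(row, pattern)) for row in weights]
    return scores.index(max(scores))

# Break the 32 connections in random order, checking recall as we go.
connections = [(j, i) for j in range(4) for i in range(8)]
random.shuffle(connections)

history = []
for count, (j, i) in enumerate(connections):
    history.append(sum(recall(p) == k for k, p in enumerate(patterns)))
    if count % 8 == 0:
        print(f"{count:2d} connections broken: {history[-1]}/4 patterns recalled")
    weights[j][i] = 0.0  # sever one connection at random
```

Because each memory is spread redundantly across several connections, early damage often leaves recall intact, and performance decays in stages - a crude analogue of the "ever more slowly, giving out results which seemed to make sense" behaviour the article reports.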

But the psychology of computers is nowhere near as complex as that of their users. Computers are transforming human-to-human contact. The result - as Christa Worthington writes elsewhere on these pages - is a new kind of relationship. People are using the Internet to find things they can't in real life; to act out fantasies in once unthinkable ways. As computers grow closer to developing "selfhood" of their own, in other words, they are eroding the boundaries - the limitations of physical reality - by which traditional notions of self have been defined. Where will the erosion end?

So far, individual neural networks have no more brain power than a flatworm; they are, however, vastly more efficient at handling information. Link them all together and you have, potentially, the sum of all knowledge. Add to that computer network a degree of artificial intelligence so that it can use knowledge selectively, and you have something very powerful indeed - and surely a long way up the evolutionary scale from a flatworm.

Certainly the precedent among life-forms suggests that when single units come together to form groups the results can be awesome. As our single-celled amoeboid ancestors began to communicate and cooperate, they found they could be more successful if some specialised in one function and others in another. Eventually they became part of a new entity - a multi-cellular organism. The rest, as they say, is evolution. Some would argue that, with flatworm computers, we are embarking on a process of similar magnitude; that by establishing a communications network for society, we are moving towards a super-organism, in which humans will be mere cells, or - who knows? - may even cease to be involved at all.

NEXT WEEK

Public life - the future of work and politics
