The Millennium Brain

The psychiatrist Edward Bullmore explains why he agrees with the late Ted Hughes that the brain is the best symbol of the end of the millennium

Edward Bullmore
Sunday 17 January 1999 00:02 GMT

TED HUGHES wrote an article nearly a year ago, suggesting that one way of making the Millennium Dome "the most astonishing building on earth" would be to build it as "a giant model of the human brain". I don't suppose this seemed like a very good idea to those on the ground at Greenwich, labouring to engineer a vast smooth dome without collapsing the Blackwall Tunnel or creating a vaulted micro-climate of tropical humidity. The architectural problems entailed in building an enormous structure that represented the immense complexity of the human brain must currently be beyond us. But even if it could be done, why would we want to do it? When was it that the brain became a suitable millennial symbol?

For most of the past 2000 years, the brain has enjoyed a popular status not much loftier than the liver or the heart or any other kind of offal. Even 50 years ago, it might have seemed a bit peculiar for the Poet Laureate of the day to play with the idea of building a brain rather than, say, a Festival Hall. But, quite suddenly, people have started to market the brain. Now it's selling better than ever before: there are more books about the brain selling more copies to general readers; more television and radio programmes about the brain are being made. There are brain images on T-shirts and coffee cups and advertising hoardings. No fashionable home is without a phrenology head on the mantelpiece. This mass market success could be dismissed as a fad, but in fact it is built on a revolution in the scientific understanding of the brain which began about 30 years ago and shows every sign of gathering momentum as we roll into the next century.

The revolutionary force is cognitive neuroscience, which means simply the science of neural or nervous systems applied to the phenomena of cognition or thought. The revolutionary programme, in a nutshell, is to discover how the brain thinks. Like most scientific revolutions, cognitive neuroscience involves looking at the world in a new way. The Copernican revolution in astronomy, which put the sun at the centre of what we've since called the solar system, became unavoidable when Galileo pointed his new-fangled telescope at the sky. The equivalent has happened in cognitive neuroscience with the development of new machines for looking at the human brain. These machines, such as the magnetic resonance scanner we use for neuroimaging research at the Institute of Psychiatry in London, can reveal the structure of a living brain to the nearest millimetre. Even more remarkably, this can be done without causing any pain or danger to the person under study.

To appreciate fully the impact of being able to see inside someone's head, one has to remember that the skull, assisted by the traditional taboo against dissection of the body, has historically done a great job of keeping the brain secret. We were already three-quarters of the way to the second millennium when Leonardo da Vinci and Christopher Wren produced the first recognisable illustrations of brains (their models were those of executed convicts). These showed that the outer surface or cortex of the brain was symmetrical and deeply wrinkled, like a walnut, and that inside the brain there were fluid-filled cavities or ventricles. But it wasn't at all clear how this anatomy functioned before death. Leonardo guessed that thinking might have something to do with the circulation of fluid through the ventricles. Famously, Descartes pointed to the pineal gland as the body's conduit to the mind, on the grounds that it was the only bit of the brain that was not duplicated in both cerebral hemispheres.

By the middle of the 19th century, some of the first neurologists, like Paul Broca, had begun to make more enduring progress by dissecting the brains of patients who had died after a stroke or another illness which caused paralysis or loss of speech. They reasoned that the part of the brain that normally controlled whatever function had been lost would be obviously damaged at post-mortem. Thus it was discovered that the thinking part of the brain was not the ventricles, nor the pineal gland, but the cortex; and that different parts of the cortex were responsible for different mental functions.

This did not alter the fact that the subjects of human brain research had to be dead. Even the discovery of X-rays at the start of the 20th century did not really unveil the living human brain. X-rays shoot through grey and white matter like a laser through cloud. So it is only in the past 20 years that we have been able to look directly at the structure and function of the human brain, and only in the past 10 years that we have been able to do so without exposing people to the risks of radiation.

The images produced by the new machines are, like all digital images, infinitely mutable by computers. The bland natural palette of grey and white matter can be replaced by vivid pseudocolour; the brain can be zoomed, warped or rotated in 3D. It may seem hazardous that a set of scientific observations can take so many forms - what has become of the facts? - yet this flexibility opens up entirely new perspectives. For example, something approximating an "average brain" can now be created by mapping a number of living brains and morphing them into a single image. The new technology also makes the brain look interesting, as never before, to many people outside neuroscience. Data on the cortical location of a certain function, which might seem forbidding in the form of a set of numbers or graphs, becomes immediately meaningful when rendered as a coloured focus of activity on a finely detailed anatomical background.
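For readers curious about the mechanics, the "average brain" described above amounts, at its simplest, to a voxel-by-voxel average of scans that have first been warped (co-registered) onto a common anatomical template. The following is only a minimal sketch in Python, assuming the registration step has already been done and using random arrays as stand-ins for real scans; it is not the pipeline used at the Institute of Psychiatry.

    import numpy as np

    def average_brain(registered_scans):
        """Voxel-wise mean of a list of co-registered 3D brain volumes."""
        stack = np.stack(registered_scans, axis=0)  # shape: (n_subjects, x, y, z)
        return stack.mean(axis=0)                   # a single "average" volume

    # Hypothetical usage: random volumes stand in for real, co-registered scans.
    scans = [np.random.rand(64, 64, 64) for _ in range(3)]
    template = average_brain(scans)
    print(template.shape)  # (64, 64, 64)

In practice the hard work lies in the registration itself, which must stretch and align each individual brain to the template before any averaging is meaningful.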

These brain maps show us that the cortex can indeed be subdivided into areas specialising in different kinds of information processing. There are, for example, specialist areas for seeing colour, movement, faces. There is also intriguing evidence for specialisation in more refined aspects of visual processing - such as recognising facial expressions of fear or disgust or discerning a walking human figure in a pattern of moving dots - which might have been advantageous earlier in the course of our evolution. Maps of basic visual functions like these have confirmed the results of pioneering research conducted by recording single nerve cells in the brains of animals. But brain mapping can also be used to examine higher functions (such as language or will or memory) which are difficult to investigate in any animal other than a human. One generalisation to be drawn is that the higher the function in question, the less likely it is to be located in a single specialised area of cortex. A complex function like language, for example, is mapped not to one area of the brain but to at least half a dozen interconnected areas. Different functions can overlap in a single area: short-term memory, for example, can be mapped to a network which includes some regions of cortex that are also part of the language network.

One of the big questions for cognitive neuroscience is how best to comprehend the networked organisation of the brain. I have already described some mental functions as higher than others, casually implying that regions within a network might be organised hierarchically. By analogy to some computers, we could imagine that a given cortical region is specialised to process some input from a subordinate region, before passing it on to another region higher up in the hierarchy for further processing, and so on until ... what, exactly? At what point in this series of processing do we become aware of it? Is there a single brain region at the very pinnacle of the hierarchy which says to itself, "Oh, I see" as bytes of processed visual data are delivered to it? The short answer is no. Much more likely is that brain networks are organised for parallel rather than serial processing, and that the "highest" functions emerge as a corollary of the integrated activity of an entire network.

Figuring out how the mind mysteriously emerges from cortical networks for parallel processing will demand much more than a few brain-imaging machines. It will require the combined expertise of scientists trained in the wide variety of disciplines that contribute to cognitive neuroscience. And this isn't the only big question that cognitive neuroscience seeks to answer. We need to know how the neural apparatus for thinking evolved - like any other bodily structure - by natural selection; and how naturally selected genes control the extraordinary process by which a cluster of primitive cells develops into a uniquely complicated adult brain.

Some 19th-century ideas which had been all but abandoned in the first half of this century have been recognised anew as being visionary; much of what was considered avant-garde before the Second World War now seems reactionary or simplistic. A surprising winner in the new order is Franz Joseph Gall, the inventor of phrenology. He is now hailed as the first prophet of the basic principle of cortical specialisation - the idea that different areas of the surface of the brain are specialised for different mental functions. Gall may have been preoccupied with functions such as high-mindedness and religious scrupulousness, which are all but forgotten by contemporary psychologists, and his experimental method of locating areas of function - by feeling for lumps on the head - is considered as ridiculous as it was 200 years ago. His search for the site of sexual desire, for example, involved feeling for hot spots on the heads of widowed (ergo frustrated) young women. But Gall has been vindicated on principle, and the porcelain phrenology head, quaintly dotted about with antiquated states of mind, is honoured as the prototype human brain map.

Carl Wernicke, a 19th-century neurologist, was one of the first to argue that "higher" functions such as language were determined by networks of cortical regions. In retrospect, this seems like a major breakthrough, but Wernicke's idea was quickly damned for lack of evidence. Wernicke had actually examined dozens of brains from deceased patients who in life had suffered language problems; but his case histories were cursory, his post-mortem examinations crude and biased by his expectations. Above all, there was virtually no corroborating evidence from any other area of brain science.

Sigmund Freud's last major work as a neurologist attacked Wernicke's ideas about cortical networks, although in fact Freud had already begun to have his first thoughts about the unconscious in terms of energy flowing through a network of connected nerve cells. After the First World War, antipathy towards Wernicke and his mainly German colleagues intensified, especially in the English-speaking world, and the concept of cortical networks was thoroughly disparaged. All this, the history writers can now claim, was an indication that the long night before the dawn of cognitive neuroscience had begun.

For the next 40 years, from 1920 to the end of the Fifties, the dominant schools of thought in psychology were psychoanalysis and behaviourism. These had nothing in common apart from a desire to make sense of the mind without worrying too much about the brain.

The language of psychoanalysis was still peppered with words such as libido, instinct, neurosis, which had originally signified something about the body or the brain. But as the ageing Freud relinquished his hope of founding a scientific psychology, rooted in what he knew of the brain, these words were used ever more metaphorically. Followers of Freud adopted the master's language, and invented much more, to construct a model of the mind that was entire in itself. Any sceptical questions about where or how the superego or the death instinct might actually be located in the brain could be turned against the questioner as proof of her resistance or his Oedipal hostility.

Behaviourists like Ivan Pavlov insisted that all we could know of the mind could be seen in the form of behaviour. We can't see that a dog is hungry, but we can see that it eats when presented with a bowl of food. We might wish to believe that it eats the food to satisfy an appetitive instinct; but this is simply jargon. Psychoanalysts might suppose that even the mind of a newborn child was already densely inhabited by instincts and archetypes. The mind of the Pavlovian baby was empty. It knew nothing at birth, and had to learn by applying a few simple rules to the maelstrom of its early experience. It was obviously necessary that there should be a brain in order for learning to take place, and the brain must at least know innately the rules for learning. But the brain of the Pavlovian baby was otherwise as unorganised as its mind was blank; and the development of adult skills like language was not critically dependent on a few key areas of cortex but rather on the "mass action" of the entire brain.

Neither of these two contradictory schools of thought has survived the advent of cognitive neuroscience with much vigour, but perhaps the more obvious loser is behaviourism. It now seems incredible to claim that the brain of a newborn baby is as unorganised as its mind is supposedly empty; or that the connected apparatus for thinking, so consistently visualised in one brain after another, still leaves room for the belief that an individual learns everything by the accidents of his or her upbringing. Accordingly, "instinct" has been retrieved from the grasp of the psychoanalysts and restored to something like its original meaning: that of an innate (and at least partly genetic) predisposition.

But what effect does the revolution in cognitive neuroscience have on our vision of the future? Once the revolution is over, we may imagine that there will be no further call for disembodied old soldiers like "self" or "soul". The circuit diagrams for talking, laughing, dreaming and lusting will have been worked out in detail. Impulses and ideas will be seen merely as changes in the neurophysiological weather. Criminality, addiction and mental illness will be diagnosed and treated more incisively. It could be a brave neuro-world indeed.

Versions of this have been common currency in science fiction for some decades. But science fiction is limited by the science of its times - witness the rivets on Buck Rogers rockets. Past fictional attempts to imagine a future world where the human mind is understood, and even controlled, in terms of the brain, have been correspondingly overpopulated by humans reduced to robots, reprogrammable for good or evil by dextrous application of electrical probes. To my mind (if it still makes sense to use that phrase, and I think it does), recent progress in cognitive neuroscience points towards a rather less totalitarian future.

It seems reasonable to hope that there will be major medical benefits, particularly in the shedding of light on poorly understood psychiatric and neurological diseases. An emphasis on evolutionary and genetic causes of brain organisation is likely to dominate our sense of how we became what we are. I expect us to have a much better grasp of the neural mechanics of emotion, attention, language and memory. I think there will be some astonishingly realistic computer models of human intelligence. But I also expect that there will be a limit to what neuroscience can tell us about the experience of cognition as we know it most intimately. Nobody has demonstrated a mind-reading machine that can infer what somebody is thinking about from the pattern of their brain waves. Indeed, it is an unresolved question whether we can assign a subjective content to some objectively observed brain data. And, given that we already know that the dynamics of brain networks are inherently unpredictable, the idea that we could make people think certain thoughts, or reprogramme their mental trajectories, by any non-destructive intervention seems even more far-fetched ...

So perhaps Ted Hughes was being more serious than I thought when I first read his article. The brain as we now know it would make a great millennial symbol. It is universally relevant and fascinating. It is the focus of an international scientific revolution which resonates far beyond science, and in which this country has a leading role. The badges and posters would sell like hot cakes. Maybe by the next millennium we'll be smart enough to take Hughes literally, to make a completely realistic, all-encompassing model of the growing, thinking, human brain, or as he put it: "the palace of the greatest Genie in the Universe, the Human Spirit!" Or maybe we'll just be smart enough to realise that even the most beautiful and intricate map of the palace can never tell us all we might want to know about the Genie.

Dr Edward Bullmore is a research fellow at the Institute of Psychiatry in London.
