It is claimed that we live in a high-risk society and that science and technology must bear a great deal of the responsibility. There is no doubt that the public is anxious about a large number of apparent risks, ranging from nuclear power and environmental pollution to genetically engineered foods. I blame the media for many of our misplaced fears: exceptional events are given excessive prominence in comparison to common ones. Daily deaths on the road go unnoticed, while "dog bites child" or "man bites dog" makes the headlines. The media is entirely responsible for the public's refusal to have anything to do with irradiated food, even though it could bring benefits and there is no evidence that it is dangerous. Anything attached to the words radiation or nuclear has become taboo.
Those from the social sciences who comment on public perceptions of risk rightly point out that the public's perceptions are often not the same as those of scientists. There is also often an explicit or implicit criticism of science for having created these risks in the first place, and an implication that science and scientists are not really to be trusted. But here lies a certain paradox, for it is only because of science and scientists that we know about the risks at all. So if one trusts scientists when they report risks, why should one be so suspicious when they estimate how serious those risks may be?
The key issue is one of trust. Should one trust scientists who work for institutions with a vested interest in denying the risk, like tobacco companies? It matters who the paymasters are. No wonder the public puts greater trust in scientists who work for environmental pressure groups. Even so, surveys show that, compared to the trust people have in politicians, scientists do very well. And rightly so, considering the recent BSE and E. coli events.
All this makes the new document prepared by the Chief Scientific Adviser, Sir Robert May, particularly welcome. Recognizing the significance of science in policy-making, he has put down guidelines for the Government. They emphasise the importance of integrity in collecting evidence, and openness in explaining how scientific advice has been obtained and interpreted.
But back to our doctor. If we take 1,000 people at random, we expect 10 to have the disease. Of these 10, only eight will be detected by the test. Tests on the other 990, who are disease-free, will give 100 false positives. Thus only eight of the 108 people with a positive test will actually have the disease: less than eight per cent! The patient, contrary to what most doctors think, should be quite reassured.
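The arithmetic above is an instance of Bayes' theorem, and it can be checked in a few lines. This is only a sketch of the worked example, using the figures the passage assumes: a prevalence of 10 in 1,000, a test that detects eight of the 10 true cases, and about 100 false positives among the 990 healthy people.

```python
# Figures assumed from the worked example in the text.
prevalence = 10 / 1000           # P(disease): 1 per cent
sensitivity = 8 / 10             # P(positive | disease): 80 per cent
false_positive_rate = 100 / 990  # P(positive | no disease): roughly 10 per cent

population = 1000
true_positives = population * prevalence * sensitivity               # 8 people
false_positives = population * (1 - prevalence) * false_positive_rate  # 100 people

# Probability of actually having the disease, given a positive test
p_disease_given_positive = true_positives / (true_positives + false_positives)

print(round(p_disease_given_positive * 100, 1))  # prints 7.4
```

The striking result comes from the base rate: because the disease is rare, the 100 false positives from the healthy majority swamp the eight genuine cases, leaving a positive test with under an eight per cent chance of indicating disease.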
If you got it right, risk patting yourself on the back, but be careful not to pull a muscle - life is risky.
Lewis Wolpert is Professor of Biology as Applied to Medicine at University College London