After an emergency admission for a stroke, the chances of dying within 30 days are nearly twice as high at the Falkirk Royal Infirmary as at the Western General or the acute unit of Borders General Hospital.
If you fall and break the neck of your femur - a common injury in the elderly - you have more than twice the risk of dying in the first month if you are admitted to the Dumfries and Galloway Trust than if you are admitted to the West Lothian district general hospital.
These figures are the first official death rates to be published for British hospitals. Last month, the news that they were to be produced led to the dramatic headlines - "NHS To Run Death Rate Leagues" and "Hospital Deaths Outrage" - as doctors protested. Some Labour MPs rushed to condemn the Government's "league table mentality" for stretching even to the grave.
Doctors, particularly south of the border, expressed alarm that people would be misled and patients worried if crude death rates were published. Virginia Bottomley was said by some to want nothing to do with similar figures being published for England - despite the fact that outline work on hospital death rates was already under way for a string of common conditions.
Yet when the figures appear in Scotland today, among doctors at least their publication looks set to cause much less of a stir. Newspapers will be able to construct dramatic headlines about variations in hospital death rates, but what has been produced is something a good deal more sophisticated than a simple death rate league table. The dense and detailed document, Clinical Outcome Indicators, lists 17 indicators of which only three directly involve hospital death rates.
Seven of them, covering Scotland's 15 health boards, detail items ranging from teenage conceptions and abortions to cervical cancer mortality, suicides and measles. Three more cover psychiatric hospitals, including deaths and suicides within a year of discharge. Another seven cover 30 of Scotland's leading acute hospitals and provide the figures for the three death rates listed above, plus other indicators such as how quickly patients are discharged after fractured femurs or a stroke, re-operation rates after prostate surgery, and emergency re-admissions after discharge from medical care.
The data is not arranged in league tables - although half an hour's work would allow one to be constructed for any of the indicators. The figures are also not crude death rates, but somewhat more sophisticated: they are adjusted for the age and sex of the population they cover. And they are qualified all over the place, dotted, in the report's own words, with "health warnings".
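The idea behind such an adjustment can be illustrated with a standardised mortality ratio, one common way of allowing for age and sex (the report does not spell out its exact method, and every figure below is invented for illustration):

```python
# Illustrative sketch of age/sex standardisation - NOT the report's
# actual method, and all rates and counts here are invented.

# Reference death rates per admission for each age/sex stratum,
# e.g. drawn from the whole Scottish population of patients.
reference_rates = {
    ("65-74", "F"): 0.05,
    ("65-74", "M"): 0.07,
    ("75+",   "F"): 0.10,
    ("75+",   "M"): 0.14,
}

def standardised_mortality_ratio(admissions, observed_deaths):
    """Observed deaths divided by the deaths 'expected' if this
    hospital's patients had died at the reference rates for their
    age/sex group. 1.0 means deaths were as expected given the
    patient mix; above 1.0 means more deaths than expected."""
    expected = sum(count * reference_rates[stratum]
                   for stratum, count in admissions.items())
    return observed_deaths / expected

# A hypothetical hospital with an older-than-average intake.
admissions = {("65-74", "F"): 200, ("65-74", "M"): 150,
              ("75+",   "F"): 120, ("75+",   "M"): 80}
observed_deaths = 45

print(round(standardised_mortality_ratio(admissions, observed_deaths), 2))
```

A hospital admitting mostly frail elderly patients can have a high crude death rate yet a ratio near 1.0 - which is why, as the report stresses, crude rates alone would mislead.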
Deaths after heart attacks or strokes, for example, the report warns, are just as likely to be due to differences in patients themselves as to differences in the quality of the treatment they have received. In the hospitals with the worst results, those treated may have been sicker and poorer, have suffered more complicated conditions, have smoked more, or come from worse housing or working environments than those in the hospitals with the better results.
For all these reasons, the report warns, "it would be wrong to conclude ... that one hospital provides better treatment than another, or that one health board's patients receive better care than those of its neighbours".
So why publish them?
The case for the defence, ironically, is best made not by the managers who will eye this data and wonder whether they should be switching contracts between one hospital and another, but by the doctors who can see the case for producing them.
Dr Angus Ford is chairman of the Scottish Joint Consultants Committee, a body that embraces the views of both the British Medical Association - the trade union side of the medical profession - and the Scottish medical Royal Colleges, whose business is to protect standards. He admits that the doctors, whom you might expect to feel profoundly threatened by their death rates being published rather than buried, were very apprehensive at first. They feared that official league tables of death rates would be used as crude comparators between hospitals. "If these were used wrongly, they could cause a great deal of alarm," he says.
But the document - drawn up after widespread consultation with the medical profession - is sufficiently qualified to calm those fears. And the use to which it will be put, Dr Ford believes, is in a sense "no more than building on a long-established tradition in medicine of comparing results to improve treatment".
For the indicators are actually about the quality of care. All the targets and Patients Charters since the NHS reforms have been about how long patients have to wait, how fast they are treated, how much they cost to treat, Dr Ford says.
"These are all managerially determined targets," he continues, "and until now they have had no relationship with the outcome of treatment. This is a first preliminary step towards measuring outcomes and producing better informed purchasers of health care. What you want the purchasers to buy is the best-quality health care. That is the most important thing from the patients' point of view."
So where the figures show big variations in death rates, where a trust seems to fare badly, doctors and managers will be under pressure to ask why. Why, for example, does the hospital take more than its share of difficult cases? Are the heart disease results bad because the local population smokes heavily? Can something be done about it? Are there better treatment regimes, better organisation, possibly even better trained doctors in the hospitals with the better results?
"I don't think doctors need feel threatened by this," Dr Ford says. "Any doctor who looks at death rates and finds his or hers are higher than the others would want to determine why that should be." In the last analysis, however, such figures may lead to contracts being shifted - an unpleasant outcome for the hospitals and doctors concerned, but the best outcome for patients if it produces better treatment.
South of the border, these arguments will feel less comfortable for doctors than they do in Scotland. Outside Glasgow there is limited room in Scotland for competition between hospitals. The Scottish Office has also been clever in the first mortality indicators it has picked: those suffering from heart attacks, strokes and broken femurs are not likely to be in a fit state to demand that the ambulance takes them 40 miles down the road to ensure they get better treatment at another hospital.
The initial figures will thus clearly be used in attempts to improve services rather than switch contracts. Further down the line, however, death rates may be published for cancers and for other treatments where there is a more realistic possibility
of patients - and those who buy services on their behalf - seeking to use hospitals with better results.
Far better, however, from both the doctors' and patients' point of view, that contracts are moved on grounds of quality than of price alone. And measuring death rates is one way, though only one way, of measuring that.