Behind The Numbers

The Health Secretary, Andrew Lansley, is keen to measure the outcomes of treatment on the NHS, and there is no more unambiguous outcome than death.

Good hospitals save more lives than bad ones, but how do you get a handle on just how many? Patients differ in the severity of their condition, social class, age, underlying health and a multitude of other ways. Statisticians who believe they can correct for these factors say it is possible to produce measures that reflect differences in the quality of care. Other statisticians say this is impossible.

This week the Department of Health bravely launched its own measure, the Summary Hospital-level Mortality Indicator (SHMI). Behind the move lies the tangled relations between the department and the healthcare analysts Dr Foster, a private company which pioneered this approach in the UK with the Hospital Standardised Mortality Ratio (HSMR), an important element in its annual Good Hospital Guide.

Dr Foster's guide gave results that sometimes contradicted those of the healthcare inspector, the Care Quality Commission. In 2009, as the CQC rated care at the Mid-Staffs Foundation Trust "appalling", the guide listed it among the 10 most-improved hospitals.

The department could have concluded that HSMRs can't be trusted, and people should disregard them when comparing hospitals. Or they could have concluded that, with a bit of tweaking and if published with lots of caveats, hospital mortality indicators could be improved sufficiently to spare any more red faces.

This is a huge risk. It's one thing laughing off the figures produced by Dr Foster (and Professor Sir Brian Jarman, a top academic in the field, is in its corner) but it will be quite another when a hospital scoring well on the department's SHMI measure turns out to be delivering rotten care.

SHMI attempts to remove any possibility of hospitals fiddling the figures. There are at least two ways of fiddling, or "gaming", in NHS parlance. The data for measuring mortality comes from the diagnostic codes recorded for every patient. A patient may have several conditions, or co-morbidities, so may have several codes. Standardised mortality ratios allow for co-morbidities, which means that the more codes you can attach to a patient, the less impact his death will have on your score.

A second trick is to code the patient as having palliative – end of life – care. This improves a score because hospitals are not expected to save the terminally ill. Some hospitals code as many as 50 per cent of deaths in this category, which is implausible.
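The arithmetic behind both tricks can be sketched in a few lines. A standardised mortality ratio is observed deaths divided by risk-adjusted expected deaths; the risk weights below are made-up illustrative numbers, not the real NHS or SHMI model, and the point is only that inflating "expected" deaths lowers the ratio without any change in care.

```python
# Sketch of how a standardised mortality ratio (SMR) responds to coding.
# The expected-death figures here are invented for illustration -- they
# are not drawn from the actual HSMR or SHMI risk models.

def smr(observed_deaths, expected_deaths):
    """SMR: observed deaths divided by risk-adjusted expected deaths."""
    return observed_deaths / expected_deaths

# Suppose a hospital records 100 deaths. The "expected" figure comes
# from summing each patient's modelled risk of death.
baseline_expected = 100.0            # primary diagnosis coded only
print(smr(100, baseline_expected))   # -> 1.0, deaths exactly as expected

# Trick 1: attach more co-morbidity codes. Each extra code raises the
# modelled risk, so expected deaths rise with no change in actual care.
extra_codes_expected = 120.0
print(smr(100, extra_codes_expected))  # ~0.83 -- the hospital looks better

# Trick 2: code deaths as palliative care. If such deaths are treated as
# largely "expected", the denominator grows the same way.
palliative_expected = 140.0
print(smr(100, palliative_expected))   # ~0.71 -- better still, same care
```

A ratio below 1 reads as "fewer deaths than expected", which is why both coding practices flatter a hospital's score.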

The department's review panel proposes new rules to address this but hasn't worked out how to do it. For now, no adjustment will be made for palliative care, leaving the door open for gaming. To reduce the degree to which hospitals game, it proposes cutting the number of factors corrected for, increasing the risk of confounding.

The department was damned if they did and damned if they didn't. But they are at risk of putting their imprimatur on a measure that can misrepresent the relative qualities of hospitals, as the review admits. Journalists will not ignore the measure or "put it in context". They like a number to encapsulate a complex situation: you can build league tables from it or identify "top hospitals" that are slipping. Now that fun will be at the department's expense.

Nigel Hawkes is Director of Straight Statistics.