As a society we are being taught to accept the notion of a "performance indicator" as a valid basis for judging the quality of an institution or service. And there is the implicit assumption that, just as every second division football club will strive to improve its performance in order to get into the first division, so the publication of performance indicators will act as a spur to institutions to try to raise their levels of performance.
However, the analogy with football teams quickly breaks down when subjected to more detailed scrutiny. In most other areas of life it is much more difficult to produce performance indicators which can be as unambiguously measured as the number of matches won, lost or drawn. Equally, in non-sporting contexts it is much more difficult to identify indicators that are truly a reflection of the quality of the service concerned.
And, even if we can be reassured that we are indeed measuring quality in a dependable and meaningful way, there remains the effect of so-called "wash-back" to be considered. It is well known that "what you test is what you get". Whatever indicator is chosen as the measure of the quality of an institution or service is likely subsequently to become the focus of efforts to "raise the scores".
Accountability is important. However, when such information becomes a lever with which to exert explicit pressure for improvement, and when, in particular, it is used as the basis for ranking institutions, this raises the stakes considerably and with it, the potential for abuse.
In these circumstances we need to be sure that the measures of quality we have chosen are robust enough truly to provide for accurate, systematic and useful comparisons. Since the most important features of quality are typically also the most difficult things to measure, all too often we find ourselves using limited and trivial indicators because these are the most amenable to reliable measurement.
This is the context for understanding this week's government publication of league tables of school performance at the end of primary schooling.
At first sight the initiative may seem to be a desirable development for parents. They can now compare the results of a particular school with others in the area, and with local education authority and national averages. Arguably, their choice of primary school will be more informed. Equally, it may be thought - as the Government clearly thinks - that the publication of their results in this way will act as a powerful spur to primary schools to achieve higher standards.
If this is so it is surprising that few other countries have caught on to the idea. While many engage in comprehensive testing of pupils at various points in their school career, very few provide for such public comparisons of individual schools, coupled with the freedom for individual parents to use such data to inform their choice of institution.
The decision to publish league tables of school performance in England is based on a number of assumptions. It assumes, first, that children's performance at this age (or indeed any other) can be accurately and meaningfully measured, and second, that schools will be spurred to improve their performance, and will be able to do so.
But schools have very different conditions in which to work. Some effective schools serving socially disadvantaged catchment areas are always going to struggle to achieve results that compare favourably with those of schools with very different intakes of children. These league tables are of "raw scores" and do not represent the value that a particular school may have added. Even where schools may be expected to improve their results, the mere provision of league tables will not tell them how to do this.
But perhaps the most worrying feature of publishing primary school league tables is the fact that the information comes at the wrong time to be useful. It is not designed to guide teachers in their subsequent work with the children concerned. By the time the results are available these children will have finished their primary school careers, whilst the secondary schools to which they will go have little interest in using information which is too late for organisational purposes and too general to guide subsequent teaching.
The publication of the tables represents a missed opportunity to put in place a national assessment system which has as its prime focus the improvement, rather than the measurement, of performance; that informs and empowers teachers with the information they need to respond effectively to their pupils' different needs; and that recognises the power of professional commitment and the demoralising effects of comparisons in which the cards are stacked against you.
The writer is head of the School of Education, University of Bristol.