John Patten's April speech, as Secretary of State, to the Higher Education Funding Council for England (HEFCE) reflected a line of thinking that has ruffled feathers throughout the sector. Mr Patten revealed that he had been having 'informal discussions' with 'representatives' of the Higher Education Quality Council (HEQC, the quality-assurance body) about the need 'to place much more emphasis on broad comparability in the standards of degrees offered by different institutions'.
The HEQC, more especially through its academic audit arm, the Division of Quality Audit (DQA), is one of the success stories of British higher education in recent years. The DQA operates a system of peer review of quality-assurance mechanisms and commands wide respect, not least because it is non-confrontational; its reports help institutions to improve their systems through a species of critical profiling, not league tables.
If Mr Patten's April exhortation were accepted by the HEQC, academic auditors would have a new, supplementary agenda when they did their rounds. They would politely inquire of institutions what mechanisms they had in place to ensure 'broad comparability' of standards.
Cambridge University would be asked how it ensured that a First in its History Tripos was of the same standard as a First in History at, say, the University of Derby. Derby would be asked a similar question. If either institution said it had no such mechanism, it would be asked why not. The replies would make fascinating reading.
Institutions are also considering their responses to the HEFCE's consultation document on quality assessment. Since the Further and Higher Education Act 1992, the HEFCE has been assessing the quality of education on the basis of a rolling programme of subjects. Institutions must submit 'self-assessments', to which they may append 'claims for excellence'. Visits are made, and reports of these may be purchased by members of the public.
Quality audit is concerned with process. Assessment is concerned with the quality of the provision itself. Assessment results not merely in a report, but also in a grade: unsatisfactory, satisfactory or excellent. League tables have already appeared. Institutions in the former polytechnic sector had been subject to visits by government inspectors; except in relation to teacher education, Her Majesty's Inspectorate never visited universities.
In the 'traditional' universities, the prospect of assessment visits, albeit by fellow academics, was viewed with trepidation; 'the revenge of the polys' was how one colleague put it. But matters have not turned out that way.
An analysis of the first 120 assessment reports published by the HEFCE, covering history, law, chemistry and mechanical engineering, reveals that of the 49 gradings of excellent, only seven went to non-traditional universities; in history, no former polytechnic obtained this grading. Moreover, the lion's share of 'excellents' has gone to departments given the highest gradings - five or four - in the 1992 research rankings exercise.
If the Government hoped assessment visits would provide support for the view that excellence in teaching can be divorced from excellence in research, it must be deeply disappointed. Many former polytechnics feel that assessment teams, in spite of protests to the contrary, have applied a 'gold standard' derived from the old-established academic traditions, and have not paid enough attention to diversity of mission.
The circumstantial evidence for this view is certainly strong. But my own feeling is that a much subtler set of mechanisms has been at work, especially when we remember that care is taken to ensure that assessment teams are mixed, drawing members from old and new universities.
I have more than a sneaking suspicion that what we are looking at is fundamentally an issue of resources: it is the better-resourced departments that are being graded as excellent, and it is because they are better resourced that they are likely to have achieved higher research rankings.
The Government justifies teaching assessment partly on the grounds that the ranking of departments will inform student choice. I very much doubt that it will do any such thing: the fault-lines in the present methodology run too deep.
The Government says it is looking for value for money. The expanded university system is looking for money for value. If Mr Major is sincere in wanting a quality higher education system embracing 'broad comparability' of standards, he must understand that he cannot have it on the cheap.
Professor Alderman is chairman of the Academic Council of the University of London.