Eight subjects have been examined by teams from the Higher Education Funding Council for England (HEFCE): five modern languages, chemical engineering, linguistics and sociology. Their judgements have been recorded in tables rating institutions on a scale of one to four points.
Such rankings can make or break departments in an increasingly discerning student market. Institutions which receive only one point in one of the categories investigated - as Bournemouth University did in German, French and Iberian Languages, and Southampton's La Sainte Union College of Higher Education did in German, French and Russian and East European languages - have to be assessed again in 1997. They face the prospect of losing public funding if they fail again in a year's time.
Both institutions fell down on "learning resources", which includes an assessment as to whether they have enough qualified teaching staff, as well as enough books and learning materials.
Established three years ago, the system has been criticised by academics for being over-bureaucratic and time-consuming. Some assessors have been accused of lacking credibility, or of not understanding the nature of the institutions they are probing. Some outside observers, on the other hand, wonder whether the assessment reports are sufficiently pointed to secure change. The system dovetails with an audit mechanism set up by universities themselves to check that institutions have procedures in place to monitor quality. These two bodies are to be streamlined by merger next year.
In an interview below, Peter Milton, director of quality assessment at HEFCE, gives his verdict on teaching in British higher education.
Q. What do you make of the quality of teaching in our universities and colleges?
A. The clear evidence from the 23 subjects we have examined since 1993 is that the bulk of the teaching is good in universities and colleges, but it would be foolish to deny that there are some problems in some subjects in some institutions. In the first three years 12 visits, or 1 per cent, resulted in unsatisfactory ratings - which is a tiny proportion.
Of those, all had achieved satisfactory ratings the following year, apart from one which closed down its provision (Music, at Worcester College of Higher Education).
Q. What are the problems?
A. There are some academics who are not very interested in teaching, yet they're employed as lecturers, and there are other academics who don't possess the ability to convey their subjects well. And there is a significant number of lecturers in universities and colleges who have never been taught how to teach.
Q. How do you conduct your assessments?
A. A team of three or more specialists is sent in to visit the institution concerned for three-and-a-half days. They look at the curriculum, teaching, student assessment, student support and guidance, learning resources and quality assurance in a given subject area.
Q. What do you ask students?
A. We discuss all aspects of teaching and learning, but don't specifically ask students what they think of teaching. Student opinion is garnered through official routes, that is, through questionnaires administered by the university. It would be unfair to sit down with a bunch of students after a lecture and ask, "What do you think of that?" It would give them all sorts of opportunities to say all sorts of things which are anecdotal, and we daren't rely on anecdotal information.
Q. How do you choose your assessors?
A. We ask institutions, professional bodies, employers' organisations and subject associations to nominate people. We also advertise in the national press. The purpose is to try to get some kind of balance so that we're not seen to be just using people from higher education.
We're very careful about how we put teams of assessors together, and there is always opportunity for sensible negotiation with institutions over issues such as matching the particular specialisms of the assessors and gender. You can't send a team of men to an institution that has got 50 per cent women students - or you shouldn't. Some institutions are very finicky about what they consider to be a match. People ring up and say things like, "How dare you send so and so? This person is not an 18th-century German historian. This person is an early-19th-century German historian." My private opinion about peer assessors is that it's an innately conservative process, because the practitioners understand the difficulties of their fellow practitioners and are inclined to give the benefit of the doubt to them rather than be as sharp as they might be.
Q. What are the costs?
A. Our costs are £3.6m a year - relatively minor compared with the budget for higher education teaching of £2.2bn. For that we do 265 visits annually. If institutions are doing what they should, their costs of preparation for visits should be relatively small.
Q. Does that represent value for money?
A. Yes, there are already some clear benefits from the process. It has raised the profile of teaching compared to research within colleges and universities to a marked extent; it has led to teaching being built into promotion, to staff development in teaching and learning, and to departments looking more critically at what they do.
Q. Do you think all lecturers should be trained to teach?
A. Yes, I think that would be helpful. It is happening now much more than it used to. If we keep exerting pressure and make teaching appear more important, it will start happening everywhere. Institutions take the HEFCE teaching assessments very seriously because their academic pride is at stake, and the kudos of being judged well by their peers is very important to them.
Q. What do you say to criticism that your reports are bland?
A. The overview reports of subjects do contain generalisations. The individual reports of institutions contain lists of things that the inspectors think are positive and negative. The evidence from a survey we did last year is that they're used a lot by schools, careers offices and students. They do significantly enter the calculations of students going to university in the future.
Q. How do you reply to modern linguists' complaints that your assessments are inconsistent?
A. Inconsistency is inevitable in any human process. We have tried very hard to train our assessors to make them more consistent. With our one-to-four-point scale it is quite difficult to make mistakes, but on occasion we do.
Q. Do you find disparities in provision between the old and new universities - that, for example, students in the new universities enjoy better teaching but worse facilities than those in the old?
A. You can't generalise now as much as you could have done in 1992. There is a blurring. There is no doubt that the old university sector is better resourced. Where the new universities score slightly is in quality assurance. The reason for that is that the old Council for National Academic Awards made sure the polytechnics had quality assurance systems in place.
Q. Are academics being assessed to death, as they claim?
A. The combination of accreditation by professional bodies, audit, assessment and internal processes conspires to make academics feel they're spending more time on these things than on the job for which they're paid. I have some sympathy with that. I think it's incumbent on us to rationalise the current processes.
Q. Will the methodology change in the future?
A. Yes. We see this as being an evolutionary process. Ultimately we must rationalise audit and assessment so that the number of visits to institutions is reduced, because the burden on academics is undoubtedly too great. I am firmly of the opinion that we should be saying, "We have helped you get your quality assurance systems up and running. You should take the responsibility back." There will always have to be some external component, but the current heavy-handed method will have to be modified.