They are the best external reference we have in judging universities. The information will enable league tables to be drawn up showing which departments have done well and which universities have shone overall, resulting in a virtual consumer's guide to higher education.
But the ratings say nothing about teaching excellence, and many universities, particularly the former polytechnics, regard teaching rather than research as their mission.
In the largest exercise of its kind, every department in the 'old' universities and many departments in the 'new' universities - a total of 72 different academic areas in 172 universities and colleges - have been assessed for the quality of their research.
The assessment has been undertaken by panels under the auspices of the Universities Funding Council. Their findings are important, as a large proportion of the publicly funded income a university receives for research will depend on the rankings. Industry, too, will want to put its money only into departments that score well.
The stakes are high because the Government is insisting on a more selective allocation of funds. When the funding council allocates its money for research, nearly 90 per cent will be decided according to QR (quality research) - the ratings. In the previous exercises, which assessed the 'old' universities in 1986 and 1989, only 40 to 50 per cent of research money was distributed according to the ratings, then called JR (judgmental research).
A university that does badly in several subject areas could see a dramatic fall in its income, and some departments could face a merger or closure as a result.
In the assessments, the top score of 5 means that the research shows international excellence; 4 means national excellence; 3 means national excellence in a majority of the work; 2 means national excellence in up to half of the work; and 1 means little or no national excellence. Any department that scores a 1 will get no money for research from the QR fund, though it might be eligible for a trickle of development money if the council thinks there is potential for building up the subject.
So how were the universities assessed? Academic staff were asked to submit their two best publications, and up to two other forms of public output (for example, articles in journals), from the last 3 1/2 years for science and engineering, and the last 4 1/2 years for the arts and humanities.
Dr John Wand, head of the research assessment exercise at the UFC, said: 'These nominated publications were the heart of the exercise. That is where you form judgements of quality.'
Were all these publications actually read by all the panel members? Dr Wand said: 'The approach varied from panel to panel. With the smaller subjects, the panellists had either read the book beforehand or made sure they did during the exercise. But with the larger subjects there had to be a different approach.'
For a hospital-based subject, for example, some 4,000 publications could have been cited. 'You look at the journal in which the particular articles were published. There is a rough order. The Lancet is more highly rated than the Chipping Sodbury News.'
The exercise was only concerned with active researchers. Some staff in the former polytechnics, for example, do not perform research and are not expected to. It would have been wrong to judge them for something they do not do.
So institutions themselves decided whether to offer their departments for judgement. Departments not put forward for assessment will not be eligible for research money. Almost all of the departments in the 'old' universities were assessed, compared with 30 to 60 per cent in the former polytechnics.
There were 63 panels, consisting of 450 people. They were chosen by the funding council after advice from subject associations and learned societies. Most were academics at UK universities, but about 10 per cent were academics from abroad or figures from British industry or commerce. All were selected for eminence in their field.