
Inside business: Scientists grapple with the X factor

Hugh Aldersey-Williams on the application of benchmarking to research and development

Hugh Aldersey-Williams
Saturday 07 September 1996 23:02 BST

Suddenly, science means business. Royal Society initiatives, such as the City Science and Technology Dialogue and the Science Project, aim to promote understanding between the science and business communities and to increase funding of the former by the latter.

The transfer of the Office of Science and Technology from the Cabinet Office to the Department of Trade and Industry implies greater emphasis on "relevant" scientific research. The office's Technology Foresight programme aims to prioritise areas for funding. The research councils are beginning to appraise proposals for their potential for wealth creation as well as scientific merit.

The issue of the commercial accountability of scientific research even lies at the heart of Blinded by the Sun, a play by Stephen Poliakoff that opened at the National Theatre last week. "It's important to learn how to evaluate R&D but there is no commonly agreed way to do it," according to Dr Robin Fears, director of science policy at SmithKline Beecham pharmaceuticals, who organised a conference on the subject last November. Any assumption that there are straightforward correlations between funding and discovery and between discovery, application and the generation of wealth worries scientists. "History provides ample evidence to the contrary," says Dr Peter Williams, the executive chairman of Oxford Instruments and chairman of the Particle Physics and Astronomy Research Council. "There is an equation involving cash and discovery but it is a non-linear one."

Nevertheless, companies whose prospects rest on the quality of their scientific research have reason to investigate benchmarking. Stock prices, the conventional index of performance, are fickle in reaction to scientific results. Other measures, such as the ratio of products in research to products on the market, or the number of patents obtained or papers published, are also suspect, especially if company boffins grow more used to achieving these targets than doing good research.

The pharmaceuticals industry provides a sharp test for benchmarking. Here, the market imperative is strong and the science is highly curiosity-driven.

"It is unwise to carry out in-house research in a tentative way," says Gerard Fairtlough, the chairman of Therexsys and founder of Celltech, the biotechnology darling of the 1980s. The biggest players cannot afford to neglect research but they are growing more averse to risk. So they need either to offload the risk, by outsourcing research, or to gain an accurate measure of it.

Start-ups have good reason to measure their progress, too. Analysts sometimes look at the number of clinical tests in progress. But this measure is also subject to distortion. Benchmarking might enable these companies to make more realistic growth forecasts and cash-flow plans.

Companies such as Parexel International, one of the contract research organisations now servicing the demand for outsourced research, also have an incentive for benchmarking and a varied pool of projects to compare. Programmes of research are often sufficiently different that measures of overall progress are of little use. But some individual parameters may be usefully compared from project to project, says Philip Harrison, Parexel's head of medical affairs.

In conventional benchmarking the greatest gains sometimes come from making apparently wild comparisons, as when companies have improved turnaround times by studying pit-stops. But not here. "The more different the drugs, the more unhelpful the comparison," says Mr Harrison.

Simple measures may be most effective. Challenged to increase the output of Unilever's Port Sunlight laboratories in the 1980s, Richard Duggan, now senior innovation adviser at the DTI, gave his scientists pagers. Every time the pagers bleeped, the scientists had to write down what they were doing. It turned out they were spending less than 20 per cent of their time actually doing science.

Mr Duggan was able to set targets for the scientists to double or treble this figure, and to slash the paperwork symptomatic of bureaucracy.

Oxford Instruments signed up for the Time to Market Association's inter-company comparison of the speed of product development. Initial comparisons were structural. The company learnt, for example, that its research teams were less cross-functional than Hewlett-Packard's and made changes accordingly. But Mr Williams warns: "There's no way to get a quantitative handle on individual research programme quality. There's no generic knowledge that comes out. A lot of gut feeling is involved."

Many companies have a panel of scientists to call upon for advice. A few, such as Smith & Nephew, use them to assess as well. The process is similar to academic peer review but far more frequent.

"Extending this assessment into comparison in order to establish best practice is a current preoccupation of many research-oriented companies," says Professor Alan Suggett, group director of R&D.

He says that such measures should not be used in isolation as a basis for picking winners. That risks discarding programmes with latent potential. Hard data should be tempered with intuition. Many research directors would applaud John Robinson, the chief executive of Smith & Nephew, when he says: "We have to expect to be wrong on some of our programmes, otherwise we aren't being adventurous enough."

Next week: how benchmarking can assist innovation in the design process.
