No, this column will this week tread where angels fear: into the midst of the undignified bunfight now going on among our economic forecasters. As the home of the Golden Guru competition for ranking the annual efforts of our forecasting fraternity, this column now feels like the referee of a junior school rugby match whose charges are getting a little out of hand. Anxious parents on the sidelines wonder whether the boys might do serious damage to each other.
The ding-dong began with Tim Congdon, one of the Chancellor's seven wise men and a believer that the most important thing to watch in forecasting economic developments is what happens to broad money (notes, coin and bank accounts). Congdon combines a messianic streak, a belief that he should not hide his light under a bushel, and a willingness to cross the road for a good scrap. It is an inflammatory mix.
Congdon charged his colleagues with lack of understanding of basic economics, professional incompetence and much besides. The real blow was the charge that the tax-funded academic institutions which run big computer models of the economy - the London Business School and the National Institute for Economic and Social Research - had consistently produced end-year forecasts worse than his own. The said forecasts are those compiled by this column's Golden Guru competition, which Congdon has indeed won.
The counter-attack began with a piece in the June edition of the London Business School's economic outlook in which David Currie belaboured broad money as an unreliable indicator whose advocacy has been 'accompanied by exaggerated claims for Tim Congdon's forecasting prowess that does not bear close examination'.
The LBS knows about broad money, because its economists once proselytised targets for its control. Sir Terence Burns, now Permanent Secretary of the Treasury; Alan Budd, now Chief Economic Adviser; and Bill Robinson, former special adviser to Norman Lamont, are all LBS alumni. They all became convinced that their original hopes for broad money had foundered on the financial liberalisation of the Eighties, which caused the indicator to soar despite a slow-down in inflation.
None of the sophisticated tests that econometrics can deploy to discover relationships between one economic variable and another - in this case, between broad money and total spending - support Congdon's belief in its significance. But what, he is then entitled to argue, of his relatively good performance as a forecaster in the Golden Guru stakes?
This is where the LBS crew has launched another counterblast. In a paper delivered at the Warwick macro-economic modelling group's conference on Thursday ('A Comparison of Short-term Macro-economic Forecasts', LBS, June 1993, mimeo), Andrew Burrell and Stephen Hall looked in detail at whether the commercial, City forecasters such as Congdon outperform the academics, and whether there is an industry 'leader'. Does the City have the edge because it is closer to financial markets and commercial realities?
They are not happy taking the end-year forecasts that I use in the Golden Guru award. The big academic institutions produce four forecasts a year, whereas the City teams can update every month. Thus an end-year forecast produced by a City team in December has a head start over one produced by an academic team in October, simply because it incorporates two extra months of data for retail sales volumes, trade, manufacturing output and the national accounts - and those two months can be crucial.
Their graphs, shown here, compare each forecast with what actually happened for growth and inflation. The heavy line for growth and inflation has been brought forward by a year so that the predictions can be directly compared. For simplicity's sake, I have excluded all the forecasters except the key protagonists, but the cognoscenti can get the full flavour in the original paper.
Burrell and Hall acquit the academics of particular incompetence, but only at the expense of besmirching everyone else's reputation too. Yes, the commercial sector forecasts for the end of the year are better, because they have later data. When like is compared with like, October with October, Burrell and Hall say that 'the most striking feature of the overall pattern of the forecasts is their similarity'.
Forecasters tend to seek safety in numbers, since they can then claim to have been wrong in the company of all their fellows, and so escape blame. If they do stand outside the pack, it makes sense to stand only a little way outside, since greater outspokenness earns little extra credit.
True, some forecasters broke away when they predicted a recession in the last part of 1990, but this is not much consolation. 'It should be remembered that by the time of these recession predictions, it was nearly six months since the contraction started.' In other words, even the luckiest and best forecasters of the recession were really only any good at backcasting.
The LBS's second conclusion is that the predictive errors appear to be distributed uniformly throughout the profession, and certainly do not favour either City teams relative to academic ones or monetarists relative to mainstream or Keynesian ones. The largest mistakes were for the contraction of 1991: most forecast a 2 per cent rise in output when it dropped by 2 per cent instead. (Tim Congdon is not immune: he forecast growth in both 1991 and 1992, although output fell).
Even the most heavily researched forecasts have been poor. For example, the Paris-based Organisation for Economic Co-operation and Development admits, in its latest Economic Outlook, that its forecasts for the big four European economies in the period 1987-1992 have been worse than a projection based on a 'random walk' - or a projection which merely said that next year would be the same as last year. Moreover, it says that IMF and government official forecasts have been even worse.
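The random-walk benchmark is simple enough to sketch in a few lines of code. The figures below are invented purely for illustration (they are not the OECD's numbers): we score a hypothetical forecaster's annual growth predictions against the outturns, and compare with a naive projection that says each year will match the previous one.

```python
# Illustrative sketch only: the growth figures are invented, not real data.
# A "random walk" projection predicts this year's growth = last year's outturn.

actual = {1990: 0.8, 1991: -2.0, 1992: -0.5}    # actual GDP growth, per cent
forecast = {1990: 2.2, 1991: 2.0, 1992: 1.0}    # a forecaster's predictions

def mean_abs_error(pred, outturn):
    """Mean absolute error over the years both series cover."""
    years = sorted(set(pred) & set(outturn))
    return sum(abs(pred[y] - outturn[y]) for y in years) / len(years)

# Naive benchmark: carry last year's outturn forward one year.
random_walk = {y: actual[y - 1] for y in actual if y - 1 in actual}

fc_err = mean_abs_error(forecast, actual)
rw_err = mean_abs_error(random_walk, actual)
print(f"forecaster MAE: {fc_err:.2f}, random walk MAE: {rw_err:.2f}")
```

On these made-up numbers the naive projection comes out ahead - which is exactly the embarrassment the OECD is admitting to for 1987-1992.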
The moral? I am increasingly convinced that economists should develop different means of forecasting that examine the impact of likely shocks and changes in behaviour, and project a range of probabilities. There should be more research into possible trend-benders, and less computerised extrapolation from history. This is a technique that the French call prospective, and it has the merit of being more honest about what we can foresee. A pretension to certainties that are constantly confounded only makes an ass of the whole profession.