Maths tests that just don't add up

Most of the problems in the Government's new world class maths tests for schoolchildren can be solved by trial and error. That's not maths - and that's only one of the flaws, says Tony Gardiner
The Independent Online

There is something very fishy about the Government's drive to introduce so-called "world class" tests in maths for pupils aged nine and 13. The suggestion that able pupils should be nurtured is welcome. The current approach could be made to work, provided certain changes are made. But without these changes, the scheme doesn't add up.


The ages of nine and 13 were chosen in the mistaken belief that existing international studies provide a benchmark at these ages for the top 10 per cent. They don't. Ministers and officials were told this 12 months ago but the age groups were stubbornly retained. Why?

The key question for any scheme which claims to improve the lot of able pupils is: "What comes next?" Suppose a pupil shows up strongly on such a test. What happens? What extra provision is available for those who do well? The Department for Education and Employment either doesn't know or prefers not to say. (Given its fondness for "acceleration", one fears successful pupils may be pushed into early GCSEs.)

For tests at age nine, the question "What next?" imposes demands which primary teachers cannot be expected to meet. It would make more sense to postpone any tests until the end of Key Stage 2, when children are finishing primary education: the question "What next?" could then be dealt with by trained specialist maths teachers at the start of secondary education.

Instead of introducing new tests in the middle of key stages, why not fit them in alongside the end-of-key-stage tests at 11 and 14? This would have other advantages. For example, the statutory tests at 11 and 14 already include "extension" papers which are unsatisfactory - very few pupils "pass" them. Rather than add another unsatisfactory battery of tests a year or two earlier, it makes sense to combine the two and do a better job.

Setting good problems to test the top 5 to 10 per cent is hard. Setting problems that can be marked reliably (so that the results are meaningful) is harder still. Those who have been doing both for many years, and who know the difficulties, suggest that one should start by using short, closed questions of a relatively traditional kind and stick, initially, to paper-and-pencil tests (other styles of question and other modes of delivery could be developed later). Instead, the Department for Education and Employment insisted that the test be delivered by computer; and the test developers preferred open problems of a kind better suited to exploratory classwork than to reliable assessment.

The pre-test in June was a predictable mess. Pupils found the style of problems frustrating and the computer software was a nightmare. The software suppliers misjudged the extent of variations in English school computer networks. Some schools could not load the test problems on to a central server and had to load each machine separately. There were other serious glitches which should have been sorted out before schools had to use the material. For example, if a pupil altered one answer, all other answers were wiped from the screen. Then there was the critical flaw that most of the problems (see example above) could be solved by trial and error. Yet maths is the science of exact calculation.

Outside advice which challenges DfEE assumptions has been systematically ignored or suppressed. The original Qualifications and Curriculum Authority working party produced a critical analysis which has never been challenged. Yet no member of the working party was included in the subsequent Advisory Group, and the working party's report was never copied to members of that Advisory Group.

For 12 months, concerned outsiders struggled to put the case for a review, away from the public gaze. They were ignored. And when it became clear that the June pre-test had been a disaster, they were told: "Nothing can be done. The tests must be self-financing and the right to use them (when developed) has been sold to other countries".

So our best pupils are condemned to sit lousy tests, with no obvious purpose, at an inappropriate age, purely because some official has "sold the rights to use these tests" in advance.

In the Sixties the top 5 per cent of English pupils performed at a very high level in international comparisons. By the Nineties our top 5 per cent were performing very poorly. We need simple ways of improving the curriculum, for ordinary pupils and the most able. We do not need yet more half-baked tests, dreamed up behind closed doors and imposed on schools by those who use the power of the public purse to persuade schools and local education authorities to dance to their tune.

The writer is Reader in Mathematics and Mathematics Education at the University of Birmingham. He is Chair of the Education Committee of the European Mathematical Society. He has just published a series of books "Maths challenge 1-3" (Oxford University Press, 2000) - an extension course for able pupils in Years 7-10
