Wednesday, 8 July 2009

A* won’t turn the tide

In response to concerns about the rising tide of top grades at A level (discussed in an earlier post), the government has decided to introduce a new A* grade at A level from June next year. The grade will be awarded to those averaging 90% in the second-year exams (the current A grade requires an average of 80% over the two years). The aims are probably two-fold:

* To separate out the top performers for university selection
* To prevent students who perform very well in the easier AS year from being able to coast in the second year.

Whilst it may well achieve the latter of these two goals, I have doubts about its efficacy in terms of the former. Essentially, it is little more than a stop-gap. Much in the way that Canute's tide could not be held back a thousand years ago, so too the A* will eventually be overwhelmed. We need only look at GCSEs to see the future of A level. When the A* was first introduced at GCSE back in 1994, 2.8% of candidates nationally achieved it. By 2008, the figure was 6.8%, almost a 150% increase in 14 years. The same inexorable fate must surely befall A levels over time.
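The arithmetic behind that claim can be checked directly (the rates are the ones quoted above):

```python
# Growth in the GCSE A* award rate: 2.8% of candidates in 1994
# rising to 6.8% in 2008.
start_rate, end_rate = 2.8, 6.8  # % of candidates awarded A*

# Relative increase, as a percentage of the 1994 rate
relative_increase = (end_rate - start_rate) / start_rate * 100
print(round(relative_increase, 1))  # ~142.9, i.e. "almost 150%"
```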

Nevertheless, one might argue that in the short run it will allow universities to identify the very best of all candidates. True, provided that we can rely on the quality of marking at the very top end. Public exams are inevitably marked by large teams of examiners, and there is consequently scope for variation in the marks they award.

The standard tolerance for an examiner in a team is 5% of the total mark for the paper. In other words, if the exam is out of 60 marks and I would give a script 37/60, an examiner in my team is within tolerance if they award any mark between 34 and 40. It doesn't take a genius to work out that 34/60 and 40/60 could lead to very different grading outcomes. The big problem for the A* is that, in my experience, examiners are far more consistent in the marks they award to standard scripts lying between the E and A borderlines than to those outside them, simply because they see far more of those scripts. There will inevitably be far fewer of the very best and very worst scripts, so the scope for variation on them is probably greater, which in my view seriously undermines the reliability of the proposed A* grade.
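To make the tolerance arithmetic concrete, here is a minimal sketch. The 5% tolerance and the 37/60 example come from the paragraph above; the grade boundaries are hypothetical, chosen only to show how two in-tolerance marks can land in different grades:

```python
# Sketch of examiner tolerance on a 60-mark paper.
TOTAL_MARKS = 60
TOLERANCE = round(0.05 * TOTAL_MARKS)  # 5% of 60 = 3 marks

# Hypothetical grade boundaries (raw marks out of 60), for illustration only
boundaries = {"A": 48, "B": 42, "C": 36, "D": 30, "E": 24}

def grade(mark):
    """Return the grade for a raw mark under the hypothetical boundaries."""
    for letter, cutoff in boundaries.items():
        if mark >= cutoff:
            return letter
    return "U"

true_mark = 37
low, high = true_mark - TOLERANCE, true_mark + TOLERANCE
print(low, high)                # 34 40
print(grade(low), grade(high))  # two different grades, both within tolerance
```

The point is that the band of acceptable marks straddles a boundary, so two examiners can both be "within tolerance" yet produce different grades for the same script.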

What does this mean for my own suggestion of publishing the raw mark achieved by candidates? The same potential for error would still be there, but the starkness of the gap between A* and A would not. If, as a result of poor marking, I score 88% instead of the 91% I deserve, that is unfortunate, but far less unfortunate than being given an A rather than an A*: at least the university can see clearly that I am not a borderline A/B candidate, and if admissions tutors are aware of the degree of uncertainty inherent in A level marking (as some of the research on marking suggests they should be), then perhaps I still have a chance.
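A small sketch of that contrast, using the 88%/91% example above. The 90% (A*) and 80% (A) thresholds are those discussed in the post; the 70% B threshold is an assumption added for illustration:

```python
# Sketch: a marking error near the A*/A boundary, seen as a letter grade
# versus as a raw percentage.
def letter(percent):
    """Grade thresholds: 90/80 as in the post; 70 for B is assumed."""
    if percent >= 90:
        return "A*"
    if percent >= 80:
        return "A"
    if percent >= 70:
        return "B"
    return "below B"

deserved, awarded = 91, 88  # a 3-point marking error

print(letter(deserved), letter(awarded))  # A* A: the error flips the grade
# The raw score of 88 still clearly separates this candidate from an
# A/B borderline candidate on around 80, which the letter alone cannot.
```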

On this subject, one of my colleagues (in the comments on my earlier blog) took issue with my sporting metaphor, arguing that whereas there is no limit to performance in the long jump, exams are all ultimately limited to 100%, and therefore even my proposals would eventually fail, as all top candidates end up scoring between 99.9 and 100% (much like figure skating, in fact). Whilst I might take issue with the argument that there are no limits to human performance (in the 2212 Olympics, the gaps between the top long-jumpers may require a micrometer to measure, as they all cluster around 11 metres!), I nevertheless take his point. Even raw scores are subject ultimately to grade inflation, and this perhaps is why many universities have started administering their own entrance tests.

In many ways, of course, this is a throwback: Oxford and Cambridge used written entrance tests until 1995 and 1987 respectively. But such tests are now becoming far more widespread: Imperial has introduced its own admissions tests, and the BMAT, LNAT, HAT and TSA are now compulsory acronyms for top-performing students. One of the reasons Oxford and Cambridge abandoned entrance tests was the feeling that they gave an unfair advantage to applicants from the independent sector, who would be better prepared. Ultimately, though, this argument doesn't stack up: better preparation already leads to higher A level grades in the independent sector (50.4% A grades at independent schools in 2008, against a national average of 25.9%), giving those candidates an automatic advantage in applications anyway.

What we all want is for the best universities to admit the best students, and if universities running their own admissions tests can assist in this goal, then we should welcome it as a way of bringing fairness to university admissions that the introduction of an A* grade is, in my view, unlikely to achieve.
