Friday, 21 August 2009

Dumb and Dumber

As the ritual annual self-flagellation over A level results reaches fever pitch, my mind turned to a story that I came across in the Telegraph a couple of weeks ago. It turns out that universities, amongst the main critics of the rising A level pass rate, may themselves be guilty of ‘dumbing down’.

According to the Telegraph, over the last ten years or so the proportion of students gaining firsts almost doubled, with upper seconds rising by a more modest 8%. The proportion of students gaining ‘good’ degrees therefore rose from 52.2% of the student body in 1997 to 61.4% last year – a rise of nearly 18% in relative terms. The increase is even more marked if we go back to the 1980s, when only a third of students gained the top two degree classifications.
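For the numerically minded, the rise quoted above can be checked with a few lines of arithmetic (a sketch only – the input percentages are simply those quoted in the Telegraph piece):

```python
# Quick sanity check of the degree-classification figures quoted above.
good_1997 = 52.2  # % of students gaining a first or upper second, 1997
good_2008 = 61.4  # % doing so last year

absolute_rise = good_2008 - good_1997            # percentage-point rise
relative_rise = absolute_rise / good_1997 * 100  # rise relative to the 1997 base

print(f"Absolute rise: {absolute_rise:.1f} percentage points")
print(f"Relative rise: {relative_rise:.1f}%")  # ~17.6%, i.e. the '18%' quoted
```

Note that the 18% figure is the rise relative to the 1997 base, not a rise of 18 percentage points – a distinction newspapers do not always make clear.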

In many ways this seems surprising. Universities are constantly bemoaning the declining skills of undergraduates caused by ‘sat-nav’ A level qualifications. Reports allege that students "lack the basic ability to express themselves in writing", and that there has been an “alarming decline” in numeracy standards. At the same time, the expansion of university education (numbers rising from 23% of under-21s in 1991 towards the government’s target of 50%) must surely imply some dilution of quality. In this light, one would surely expect a reduction in the number of 'good' degrees rather than the large increase we have seen.

It is possible, of course, that universities are simply getting better at preparing undergraduates for their final degrees, but the evidence is that more and more universities are making use of postgraduates for teaching rather than their ‘big name’ academics. At the same time, an increasingly market-driven environment is leading to inexorable upward pressure on degree classes.

This pressure comes from two main sources, the external market and the internal one. Firstly, as discussed in an earlier post, university league tables include the proportion of good degrees as part of their ranking – with students being increasingly aware of the tables, a good ranking is important for recruitment and, by extension, funding. A leaked email from Manchester Metropolitan University last year showed how these pressures were brought to bear. The email, to Maths and Computing staff, says: "As a university we do not award as many Firsts and 2:1s as other comparable institutions so there is an understandable desire to increase the proportion of such awards. Please bear this in mind when setting your second and final year assessments, especially the latter." As hints go, this is a pretty broad one.

The internal market also exerts pressures – universities will close down courses that are not ‘economically viable’, essentially meaning having lots of students on them. If you want to keep your job, you need to run a course, and students are unlikely to sign up for courses in which high grades are difficult to acquire, leading to a beauty contest between different courses within degree programmes.

As a consequence, universities' protests at A level grade inflation making life difficult for them begin to ring a little hollow, as the same criticisms are now being levelled at them by prospective employers. Indeed, there have been some reports suggesting that degrees have become so devalued that employers prefer to look at university graduates' A level grades as being a better discriminator than degree class! A worrying development, given the rising cost of studying.

It is perhaps surprising, therefore, that at the very time when A level grades are being benchmarked against students' GCSE backgrounds in an attempt to mitigate grade inflation, universities are resisting any attempts to introduce greater rigour into their own procedures. Since the demise of the CNAA (after polytechnics became universities back in 1992), each university has set its own standards with no over-arching supervision. Pots and kettles come to mind.

Thursday, 16 July 2009

What is to be done?

In my last post, I expressed my antipathy for the IB, whilst at the same time accepting that A levels, as they stand today, are far from perfect. Subsequently, I got to thinking about what it is that I dislike about A levels, and what might be done.

On reflection, it turns out that there is only one aspect of the current A level system that I really dislike: its modularity. All A levels are split into either 4 or 6 ‘bite-sized’ chunks, each of which students can study, sit an exam in, and then move on from. This was introduced with the best of intentions: to maintain student interest and focus throughout the year, to allow students an early opportunity to assess their standard, and to give students a qualification after one year of study, thereby increasing flexibility and allowing more subjects to be studied.

These benefits, however, have come at a price, which is an inability to ask questions which cut across module areas. In some subjects this probably doesn't really matter - in History, for example, an inability to compare Hitler and Henry VIII probably isn't a huge loss (although I stand to be shouted down), but in the three subjects in which I have a personal interest (Economics, Business Studies and Maths), modularity is definitely a problem.

Of the three, Economics is the subject with the fewest problems (even so, I enter my own students for an exam board which is effectively non-modular), simply because there are certain underlying principles and approaches which apply to any area. Nevertheless, the split which most exam boards make between micro- and macroeconomics is limiting, because no exam paper can include questions which touch on both areas. The real world, however, makes no such distinctions – any discussion of (for example) rising oil prices should obviously involve both micro (the market itself) and macro (the impact on national and global economies) elements.

In Business Studies, modularity is faintly absurd. How are students to be tested on marketing, the integrating function of business, in a module that specifically excludes most of the other business functions and knowledge of the external environment in which businesses operate? How can one understand operations management divorced from sales and economic forecasts? Obviously it is possible to study business in such a way, but it is hardly satisfactory.

This is nothing, however, compared with the problems which afflict Maths. For Maths, modularity is an abomination, a cancer which strikes at the very heart of the subject. When I work with my students on Oxbridge entry, I will frequently look explicitly at the application of pure maths and statistical techniques to economics (something that the A level itself excludes). Time and time again Further Maths students are unable to apply techniques such as basic calculus and geometric series to simple economic scenarios.

Once I was flabbergasted and frustrated, but now I understand. These topics all form part of the modules taken in the students' first term of study, and this creates two problems. Firstly, it was all a long time ago – by the time I want to use the techniques, the students have moved on to more specialist topics in calculus and have essentially forgotten them all. Secondly, because each module contains only certain specific topics, the range of questions which can be asked is very narrow – students know exactly which techniques and approaches will be needed, because they are the only ones they have been taught for that module. Hence, when presented with a real-world scenario which requires maths-based skills, but where they don't know for sure which ones, they are flummoxed.

As someone who loves maths, I find this utterly depressing. In my own (non-modular) A level, I loved not knowing which techniques were required in exam questions – it was a puzzle that had to be worked out – a genuine intellectual test. A levels have been criticised by some for creating a “sat-nav” generation, and whilst this is perhaps a little harsh, I understand clearly what these critics mean.

So, what is to be done? For me, all that is required is an end to modularity. This would give greater freedom over how the course is taught and would allow students to be set more interesting challenges in exams, which should in turn discriminate between them better. I am not alone in this, and a new qualification, the Cambridge Pre-U, has been launched with these goals in mind – one which logically I should therefore like very much. The Pre-U has done away with modules and reverted to the twentieth-century world of final assessment after two years.

I have, however, two reservations about switching horses away from A levels. Firstly, the Pre-U is a new and elite qualification, with only a handful of the very best independent and state schools offering it. Until we can see how the exam is working and how it is marked, it would be a disservice to students to use them as guinea pigs. Secondly, there has been some criticism that the actual course material is ‘antediluvian’, failing to reflect developments in subjects over the last twenty years or so. Although this is not true in economics, where the course content looks strong, in business and management the course looks very traditional (and I mean that in a bad way!).

So, perhaps in a few years I will be offering Pre-U economics. It addresses my criticisms, and if it works, it could prove an attractive option. Alternatively, if the Pre-U starts to gain market share, perhaps A levels will be reformed, reining in some of the worst excesses of modularity and reasserting their ‘gold standard’ status. My gut feeling is that there is going to be major turbulence in independent education over the next few years. We live in interesting times.

Wednesday, 15 July 2009

Well he would, wouldn't he?

This immortal line was uttered in court by Mandy Rice-Davies when it was put to her that Lord Astor denied not only having had an affair with her, but having ever met her at all. I was reminded of this comment when reading David James' critique of A levels in the Independent yesterday, which whilst admittedly being flagged as ‘opinion’, was little more than a thinly disguised advertisement feature for the International Baccalaureate's (IB) diploma programmes, which compete for market share with A levels and GCSEs.

In the article, James wheeled out the well-worn criticisms of the A level system (discussed in an earlier blog) treating them as established fact. According to James, the higher A level pass rate is down to the government’s desire to improve grades so that more students could get to university. This argument is flawed at two levels. Firstly, as argued in my earlier blog, A levels simply set a standard that has to be achieved – as time passes and teaching and learning improve, it is inevitable that more students will achieve that standard. More fundamentally, it misses the point that students don’t need higher grades to go to university – an expansion of university places is sufficient in itself. Suppose that tomorrow the number of university places doubled. Universities would want to fill these places for revenue reasons. All that would happen is that the grades needed to gain entry for courses would fall – there is no requirement for an improvement in A level grades for there to be an increase in university places. Hence James’ argument is logically flawed.

Whilst it is true, as I have argued before, that the A level system is far from perfect, this does not make the IB automatically better. The IB is a minority qualification offered at present by a small number of specialist providers. In a 2005 study (the last year for which full figures were available), IB applicants to UK universities made 4,599 applications compared with 517,556 from non-IB applicants – giving the IB a 0.9% market share. How suitable such a minority qualification would be for a widespread roll-out across the UK must be open to question, especially given the actual composition of the IB programme.

Taking a personal perspective, as I am wont to do, the IB would have been an unmitigated disaster. I was a mediocre O level student (couple of ‘A’ grades, the rest ‘C’s), but I excelled at A level, going on to win a place at a top university. What made the difference? This is something I have thought about a great deal over the years. Actually doing some revision formed an important part, but more significant is what made me want to revise. At O level, I was forced to take a whole load of subjects that I frankly detested. Anyone for the anti-corn law league and the 1832 Reform Act? Amo, amas, amat, amamus, amatis, amant? Learning the first 20 elements of the periodic table by heart? No, I thought not. The only subjects I actually enjoyed at O level were the conceptual ones – Maths and English, and these were the only ones I did well at. In the sixth form, I therefore ditched all of my O level subjects except Maths, and picked the ones I actually wanted to study (Economics and Geography). A levels were a revelation, as I specialised in subjects that I really loved.

The Achilles’ heel of the IB, for me at least, is therefore lack of choice. The programme, for example, has compulsory Maths (the lowest level allowable includes delights such as vectors, matrices and differential calculus – fine for those who enjoy Maths, but restrictive as a compulsory element, I would have thought). Other compulsory elements include a foreign language and a science (I would have been reaching for the cyanide at this point), with six subjects overall (barely fewer than the eight I was forced to suffer at O level).

Whilst it is hardly surprising that the IB will be keen to expand its share – back in 2007, the government was offering state schools a subsidy of £26,000 towards the cost of conversion (IB registration fees, accredited training and so on) – we should be very careful before we throw the baby out with the bathwater. James’ implicit claim is that universities feel the IB to be superior. If this were really true, one might have expected the statistics to reflect it, but the 2005 study (above) showed an application success rate of 67% for all non-IB applicants (including those with GNVQs and the like), against just 70% for the IB – a surprisingly slim margin, given the way it has presented itself as an elite qualification.

A levels offer both depth and choice to students, giving them the freedom to enjoy their studies and to specialise in the areas they really like. I have seen students who have become completely disillusioned with education because of the grind of compulsory subjects at GCSE spread their wings (much as I did) when allowed the freedom to choose. Whilst the IB might be suitable for some, I have serious reservations about its validity as a replacement for A levels.

Tuesday, 14 July 2009

The Pure Hell of St Trinian's

It was with no great sorrow that I read in the Telegraph last week about the demise of two all-girls schools in Bedfordshire, forced to merge as a result of falling student numbers. Although no doubt recession-related, the closures are part of a seemingly inexorable decline in single-sex education in the UK. In the 1960s there were over 2500 single-sex state schools in the UK, but numbers have fallen to around 400 today. In the independent sector, 130 single-sex schools have merged, turned co-educational or closed. Having observed first-hand the anti-socialising effects of single-sex education, I find this trend a welcome one.

My own story is that at the tender age of 9 (the end of year 4), I left my local primary school to join an independent all-boys day school. When I left primary school, interactions with girls were generally of the pigtail-pulling, skipping rope stealing, chasing around the playground nature and although differences were beginning to emerge, we were largely an amorphous lump of tag-playing children. Three years later I rejoined co-education (at the start of year 8). Things had changed; on the cusp of teenage years, it became rapidly apparent that not only was pigtail-pulling no longer on the agenda, but more fundamentally I had no clue as to what the agenda was.

Perhaps in this I was no different from my class-mates – we all seemed to struggle to interact with girls – but for me the situation was more acute: I had been teleported from a world dominated by Top Trumps, dirt and rude noises to a maelstrom of emotion and hormones with which I was singularly ill-equipped to deal. How those who are wholly single-sex educated ever learn to cope is beyond me; it is perhaps no surprise to learn that boys educated at single-sex schools are a third more likely to divorce in later life than their co-ed counterparts.

But what of the academic benefits? These seem largely to centre around the idea that girls will be academically more successful in single-sex education. The GCSE and A level league tables are dominated by top independent girls' schools – surely a strong argument in favour of single-sex education? Not according to Alan Smithers, Professor of Education at Buckingham University and Director of the Centre for Education and Employment Research. Smithers' research suggests that whether or not a school is single-sex makes no difference to students’ educational attainment. Differences in performance were accounted for by the ability and social background of pupils, and head teachers made ‘exaggerated claims’ about the benefits of single-sex schools because they were under threat.

Nevertheless, in the US, there is a growing movement in favour of gender segregation, arguing that girls and boys learn differently and will therefore benefit from single-sex education. There is much debate over the validity of this research. Rosalind Chait Barnett, from the Women's Studies Research Centre at Brandeis University, argues that the studies behind this single-sex revival in the US 'do not meet even the most primitive standards of scientific query'.

To some extent, though, this all misses the point. Proponents of single-sex education generally focus on the benefits to girls – but girls already outperform boys at almost every level in the UK. Girls have outperformed boys at GCSE since records began in 1988. Since 2002, girls have scored more A grades at A level than boys, and girls score more 2:1 degrees or better. Boys now only have the lead in the proportion of first class degrees awarded, and this year could see that bastion breached too. Whilst I suppose proponents could argue that girls would do even better if educated separately (although Professor Smithers would demur), this has to be weighed against the fact that girls and boys are prevented from mixing and learning about one another, and the social damage that this must cause.

In the end perhaps we should leave the final word to those educated in single-sex schools. 40 percent of them would want their own children to be educated co-educationally. I am definitely one of them.

Monday, 13 July 2009

Lies, damned lies and personal statements

When I was 8, Mrs Holmes, my Year 3 teacher, appointed me as Blackboard Monitor (I’m showing my age now). I was the proudest boy in the world, as at the beginning of every session, I made sure that the blackboard was beautifully free of chalk. What a responsible child I was - an experience that has stayed with me forever. Nevertheless, fast-forwarding nine years to when I applied to university, I didn’t include my mighty blackboard-rubbing responsibilities in my personal statement. Obviously not, I hear you say, but my ridiculous example is in fact very little removed from things that students really do include in their applications.

Let us shift our perspective to the University of Hogwarts, Potions department. Professor Severus Snape is reviewing applicants for his undergraduate degree in Alchemy. How much weight will he give to my prowess at Quidditch? The fact that I have grade 8 in the ukulele? That I lived off a diet of rabbit droppings and twigs for a month to win my Duke of Edinburgh Gold? From my understanding of the world of Harry Potter, the answer is: not much.

This is not to suggest that such activities and achievements are unimportant. Far from it – at d’Overbroeck’s we have spent recent years fundamentally redeveloping our enrichment programme. But there is no point in putting these things onto a university application just because you did them, and even less point in doing them just to put on a university application. Who wants frostbite just to add a line to a personal statement that is probably going to be viewed with contempt by Professor Snape anyway? “They go outside? When they could be in here studying? Reject them!!”

Now whilst few Admissions tutors will be as narrow-minded as the fictional Professor Snape, the point of your application is essentially to demonstrate first and foremost a love of the subject, and secondly that you are a rounded individual (less likely to drop out). It is this second section that seems to fill students with bowel-loosening terror, that makes them take up activities that they hate simply ‘to put them on my UCAS form’. But it is all so unnecessary. The role of this second section is really to demonstrate normality – that you have other interests that you enjoy and that you can talk about them intelligently, explaining what you get out of them.

In recent years, I have had students writing about their interest and involvement in: debating, cricket, starting up their own business, swimming, going to the gym, reading biographies, mixing music in their spare time, painting, singing, playing the piano, working in their mum’s shop, volunteering, setting up websites, performing a concert in front of thousands of people and repairing broken old Ford Cortinas (no blackboard monitors, however...). They all did these things because they enjoyed them, and as a result, they were able (with prodding and support!) to write about them in a believable way. Essentially it’s not what you do that matters, just that you do something!

The best personal statements are therefore not those that make a million claims that can never be substantiated – at best these will simply produce a ‘so what’ from the reader – but rather those that can show what makes you tick; if your interests support your degree, so much the better, but that is not their real point. Their point is to show the other side to you, the side that doesn’t read books, but instead goes out into the garage and lovingly restores broken old cars from the 1970s – so much more interesting and believable than yet another first 15 rugby captain.

Thursday, 9 July 2009

University league tables: perils and pitfalls

"I'm not going to York!"

Little surprises me much these days, but I was slightly taken aback by the vehemence of the student’s objection. I mean, OK the Stonebow centre is a monstrosity, but by and large I had thought York to be a decent enough city when I lived there for a few years as a teenager. What’s more, several of my former students have recently come back to tell me how much they are enjoying their economics course there.

“Why not?” I enquired mildly.

“Because,” came the ringing reply, “they are ranked 23rd in the UK for economics!!!”

This was a couple of weeks ago, as we had been beginning the process of university application for 2010, and I had (clearly mistakenly!) suggested that the student have a look at York.

Not willing to leave it at that, I probed further:

“Ranked 23rd in terms of what? What criteria were they using?”

“Er. They were just 23rd.”

“23rd in terms of the quality of gherkins in the student restaurant? 23rd for the loudness of music in the bar? 23rd for the dress sense of the lecturers?”

“Don’t think so. Dunno really.”

This encapsulates the problem with university league tables. It’s not that the Times (the one used by the student in this case) hadn’t made its methodology patently clear (it had). It is simply that we seem drawn to look uncritically at the rankings, like moths to flames.

As the student and I investigated further, it turned out that the Times looks at Research Quality, Student Satisfaction, Graduate Prospects and Entry Standards (the average A level grades of students on the course). The problem is that not all of these matter equally to all applicants, some of them are subjective and variable over fairly short periods of time, and others rely on data from some time ago. Whilst we can probably all agree that Oxford and Cambridge are good places to study, it is less clear that anyone can meaningfully tell the difference between, say, York’s economics department in 23rd place (83.3 points) and Bristol’s in 11th (86.5 points) – a difference of 12 places for just 3.2 points. Looking at the two, we find:

            Research   Entry   Satisfaction   Prospects   Total
Bristol:       3.9      457        70%           86%       86.5
York:          3.2      457        74%           72%       83.3

York’s score is therefore lower than Bristol’s for two reasons: firstly, when research quality was last assessed (2008), Bristol’s was rated higher (whether this makes much difference to undergraduate students, however, is a matter for debate); and secondly, Bristol has better ‘Graduate Prospects’ – an item based on the job destinations of 2006 and 2007 economics graduates (a relatively small sample), and one where the methodology behind the calculation is unclear.

What about all the other relevant factors? Bristol is a city-based university, York is on a campus. Bristol is a big city, York isn’t. What about accommodation, student life, safety and a million other factors which cannot possibly be crammed into the economics subject league table?

And the problems don’t just end there. Other league tables exist too, measuring slightly different factors, but on occasion producing wildly differing outcomes:

  • Edinburgh: A poor 20th= in the Times, but a cracking 4th in the Guardian.
  • East Anglia: A respectable 13th in the Times, but fails to trouble the Guardian’s scorers at 49th.

Who should we believe? Are we diligent enough to check the methodology?

I hope so, but the reality is that laziness leads to far too many students making decisions based on a relatively narrow and subjective set of data, a decision that will make a huge difference to their next three years and possibly the rest of their life. At the very least, it could be an expensive mistake, with 1 in 5 students failing to complete their degrees.

League tables are obviously helpful, but an understanding of their limitations is crucial. They should form only a small part of any decision. Anyone who hasn’t actually visited universities, spoken to some current undergraduates, and thought carefully about exactly what they want, but has instead simply worshipped at the altar of league tables, is far more likely to be among the 1 in 5 who drop out than the 4 in 5 who complete – with a debt of around £22,000 to repay for their troubles. A high price to pay for laziness.

Wednesday, 8 July 2009

A* won’t turn the tide

In response to concerns about the rising tide of top grades at A level (discussed in an earlier post), the government has decided to introduce a new A* grade at A level from June next year. The grade will be awarded to those scoring 90% in the second-year exams (the current A grade requires an average of 80% over the two years). The aims of this are probably two-fold:

  • To separate out the top performers for university selection
  • To prevent students who perform very well in the easier AS year from coasting in the second year.

Whilst it may well achieve the latter of these two goals, I have doubts about its efficacy in terms of the former. Essentially, it is little more than a stop-gap. Much as Canute tried to turn back the sea 1,000 years ago, so too the A* will eventually become overwhelmed. We need only look at GCSEs to see the future of A level. When the A* was first introduced to GCSEs back in 1994, 2.8% of candidates nationally achieved it. By 2008, the figure was 6.8% – an increase of almost 150% in 14 years. The same inexorable fate must surely befall A levels over time.
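The GCSE figures above translate into growth terms as follows (a quick sketch; the percentages are those quoted in the paragraph):

```python
# Growth in the proportion of GCSE candidates achieving A* (figures as quoted above).
pct_1994 = 2.8  # % achieving A* in 1994, the grade's first year
pct_2008 = 6.8  # % achieving A* in 2008

growth = (pct_2008 - pct_1994) / pct_1994 * 100
print(f"Relative increase, 1994-2008: {growth:.0f}%")  # ~143%, the 'almost 150%' above
```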

Nevertheless, one might argue that in the short run, it will allow universities to identify the very best of all candidates. True, provided that we can rely on the quality of marking of scripts at the very top end. Public exams are inevitably marked by large teams of examiners, and there is consequently scope for variation within the marks that they award.

The standard tolerance for an examiner in a team is 5% of the total mark for the paper. In other words, if the exam is out of 60 marks, and I would give the script 37/60, an examiner within my team is within tolerance if they award a mark between 34 and 40. It doesn’t take a genius to work out that 34 and 40 out of 60 would lead to very different grading outcomes. The big problem for A* is that in my experience, examiners are far more consistent on the marks awarded to standard scripts lying between the E and A borderlines than outside them because they see far more of those scripts. It is clearly the case that there will be far fewer of the very best and very worst scripts. Therefore, because examiners see fewer of the best scripts, the scope for variation is probably greater, which in my opinion seriously undermines the reliability of the proposed A* grade.
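The tolerance arithmetic described above is easy to sketch out (using the same assumed figures: a 60-mark paper and the 5% tolerance quoted):

```python
# The 5% marking tolerance example from the text (illustrative figures only).
paper_total = 60
senior_mark = 37  # the mark I, as the senior examiner, would award

tolerance = round(paper_total * 0.05)  # 5% of 60 marks = 3 marks
low, high = senior_mark - tolerance, senior_mark + tolerance
print(f"A team examiner is 'within tolerance' anywhere from {low} to {high}")
# 34/60 and 40/60 are roughly 57% and 67% - potentially very different grades
```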

What does this mean for my own suggestion of publishing the raw mark achieved by candidates? The same potential for error would still be there, but the starkness of the gap between A* and A would not be. If, as a result of poor marking, I score 88% instead of the 91% that I deserve, then that is unfortunate, but far less unfortunate than being given an A rather than an A* - at least the university can see clearly that I am not a borderline A/B candidate, and if they are aware of the degree of uncertainty inherent within A level marking (as some of the research done on marking suggests they should be), then maybe I have more of a chance.

On this subject, one of my colleagues (in the comments on my earlier blog) took issue with my sporting metaphor, arguing that whereas there is no limit to performance in the long jump, exams are all ultimately limited to 100%, and therefore even my proposals would eventually fail, as all top candidates end up scoring between 99.9 and 100% (much like figure skating, in fact). Whilst I might take issue with the argument that there are no limits to human performance (in the 2212 Olympics, the gaps between the top long-jumpers may require a micrometer to measure, as they all cluster around 11 metres!), I nevertheless take his point. Even raw scores are subject ultimately to grade inflation, and this perhaps is why many universities have started administering their own entrance tests.

In many ways, of course, this is a throw-back – Oxford and Cambridge used written entrance tests until 1995 and 1987 respectively – but their introduction is now becoming far more widespread: Imperial has introduced its own admissions tests, and BMAT, LNAT, HAT and TSA are now compulsory acronyms for top-performing students. One of the reasons that Oxford and Cambridge abandoned entrance tests is that they were felt to give an unfair advantage to applicants from the independent sector, who would be better prepared. Ultimately, though, this argument doesn’t stack up – better preparation leads to higher A level grades in the independent sector (50.4% A grades at independent schools in 2008 compared with a national average of 25.9%), giving those candidates an automatic advantage in applications anyway.

What we all want is for the best universities to admit the best students. If universities running their own admissions tests can assist in this goal, then we should welcome them as a way of bringing greater fairness to university admissions – something that the introduction of an A* grade is, in my view, unlikely to achieve.