When numbers don’t add up
Too often, we rely on figures which have been handed to us on the platter of a press release, without delving into those numbers to see if they make any kind of sense or deserve the spin which is often put on them.
An example: in 1997, the Government announced it was pumping an extra £300million over five years into creating a million new childcare places. Everyone – media, education pundits, even the Opposition – was suitably impressed and agreed that this vast sum was Generally A Good Thing.
What no one did was stop and ask: “Hang on – is £300million to create a million places really such a big amount?”
On the face of it, yes – but let’s do the maths: £300million divided by a million gives you £300 per place. Divide that by five – the sum is to be spread over five years – and that gives £60 a year for each place. Divide that by 52 – the number of weeks in a year – and you are left with just £1.15 per week. Even with the influx of cheap labour from eastern Europe, you’re unlikely to be able to fund a childcarer on just £1.15 a week.
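The three divisions above can be sketched in a few lines of Python. The figures are those from the announcement; the variable names are mine:

```python
# Back-of-envelope check on the 1997 childcare announcement
total_funding = 300_000_000   # £300million, the headline sum
places = 1_000_000            # a million new childcare places
years = 5                     # the sum is spread over five years
weeks_per_year = 52

per_place = total_funding / places        # £300 per place
per_year = per_place / years              # £60 per place per year
per_week = per_year / weeks_per_year      # about £1.15 per place per week

print(f"£{per_week:.2f} per place per week")
```

The same pattern works on any “millions for thousands” claim: divide the headline sum by the number of units, then by the years, then by the weeks, and see what is actually left.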
Whenever you hear politicians spouting vast numbers – the Government trumpeting millions to fund childcare places, councillors boasting about the thousands to be spent on a new leisure centre, trade unionists warning of the hundreds of job cuts to come – it is a good rule to look at all the facts and ask yourself: “Is that really a big number?”
The same applies to survey results, particularly crime and medical statistics, which can often be twisted and massaged into any result needed.
In this case, competent journalists need to ask themselves not only “Is this a big number/percentage?” but also: “What exactly is being counted here?” For any statistic to have any worth, there must be a clearly defined and identified thing which is being counted – is it frozen peas or mushy peas?
Early in 2005, the results were released of a survey which had asked teenage boys what they had been up to in matters such as assault, theft, drugs and other antisocial behaviour. The headline writers readily jumped on the results: “YOB BRITAIN!” screamed the Sun, typically, “1 in 4 teen boys is a criminal! Home Secretary: It’s appalling!” Other media, from the upmarket worthies to the BBC, also ran with the Yob UK angle.
Now let’s look at exactly what was being counted here. This was a competent survey, and it had clearly defined targets of what was being counted. In the category of assault, for instance, the survey asked: “Have you ever used force or violence on someone on purpose, for example by scratching, hitting, kicking or throwing things, which you think injured them in some way?” All these are, technically, assault, so the survey cannot be faulted on that count. But the question carried an interesting rider: “Please include your family and people you know as well as strangers.”
It turned out that 58 per cent of the “assaults” had been “pushing” or “grabbing”, and 36 per cent were perpetrated against siblings. If big brother grabbed little brother six times in a year, leaving him unscathed otherwise, big bro was counted as a “prolific offender”. If he did it just once a year and left a bruise, he was counted as a “serious offender”, since he had left an injury.
In fact, when you looked more closely at what the boys actually admitted to, 75 per cent said they had not pushed, grabbed, scratched or kicked as many as six times in the past year, nor had they done any of those things even once in a way that caused so much as a minor injury.
Sure, some admitted to having got up to more serious misbehaviour, but it didn’t justify the “1 in 4” headline, because that figure, looked at closely, was meaningless without knowing exactly what was being asked and counted.
Comparison statistics are also full of numerical minefields which can catch out the unwary journalist. In 2006, at a time when tagging criminals was under fire from the media, the Government rushed out figures which purported to show what a success the tagging scheme was. Of the 130,000 people who had been through the scheme, it said, only four per cent had committed crimes while tagged. This was compared with a recidivism rate of about 67 per cent for newly released, untagged, prisoners.
The first rule of comparison statistics is that they must compare like with like to say anything meaningful. In this case, the Government’s figures fell at the first hurdle – and at several more after that.
1) They didn’t compare like with like. Those who were tagged had been judged by prison officials to be less likely to offend – that was why they were let out in the first place. The released prisoners had served out their sentence in chokey because they had been judged more likely to reoffend, which is why they hadn’t been released early on the tagging scheme.
2) The time periods differed. This was just downright sneaky. The maximum period of tagging during which offences could occur was four-and-a-half months; the period over which offences by released prisoners were counted was two years. Obviously there were going to be more offences in the longer period.
3) The wrong people were compared. The alternative to tagging is not freedom but jail – either you’re let out early with a tag, or you are not let out at all. Since prisoners are obviously in no position to commit crimes against the public while they’re inside, a fair comparison with those still behind bars would have made tagging come off statistically worse.
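Trick number 2 can be exposed by putting the two groups on the same timescale. The sketch below uses the percentages and periods quoted above; the crude linear scaling to a per-month rate is my own simplification (reoffending does not really accrue at a constant rate), but it shows how much of the 4-versus-67 gap is simply a difference in observation periods:

```python
# Figures from the text: 4% offended while tagged (observed for at most
# 4.5 months); 67% of released prisoners reoffended within 2 years.
tagged_rate, tagged_months = 0.04, 4.5
released_rate, released_months = 0.67, 24.0

# Naive per-month rates – a deliberate oversimplification, for illustration
tagged_per_month = tagged_rate / tagged_months
released_per_month = released_rate / released_months

print(f"Tagged:   {tagged_per_month:.2%} of offenders per month")
print(f"Released: {released_per_month:.2%} of offenders per month")
```

Even after this crude adjustment the tagged group still looks better – but, as points 1 and 3 make clear, the two groups were never comparable in the first place, so even the adjusted number proves nothing.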
It is worth noting that even the normally conservative, hard-to-rouse Royal Statistical Society had some sharp words about the Government’s cynical exercise in manipulative number-crunching.
Recommended reading: The Tiger that Isn’t: Seeing Through a World of Numbers, by Michael Blastland and Andrew Dilnot, Profile