This TRL report looks shakier than most. It was published only the day before Jersey's vote, which limited the opportunity for peer review. Two responses:
I've not paid TRL to read it(!) but I see now that it's available on the CTC link above. So here are my first thoughts, written as I read it. Then I'll read the CTC and Guardian articles and see what I missed.
In section 1.2, they seem to dismiss all
public attitude surveys, claiming that attitude does not clearly affect primary public health outcomes. This seems a bit suspicious to me when the executive summary seems to suggest that they're going to argue that Jersey is different to Australia and won't see a similar reduction in cycling. How are they going to do that without attitude evidence? We'll see.
The literature search (section 2) claims to use web-based literature search tools and three databases (TRID, ScienceDirect and PubMed). Not awful, but there are already lists of oft-cited helmet studies which it would seem sensible to include. I'll see if I spot any famous studies being missed out. It also excludes anything published before 2008, arguing that a previous 2009 TRL review already covered them. Let's see if any findings from that 2009 review are mentioned later.
The search terms (table 2.1) look OK, although I'm surprised to see attitude terms included after what happened in section 1.2.

Helmet effects
Next comes a chunk on helmet design which seems largely unsupported by evidence. I'm no physicist or materials scientist, so I'm not sure if this is accurate. As I've written before on this site, I'm concerned about rotational forces and strangulation, but those don't seem to be mentioned. There's a claim that ventilation and aesthetics are important to users, which seems odd because I don't remember anyone ever telling me they wear a helmet for the ventilation or look. Then it says that they're concerned with three types of head injury: cranium fractures, focal brain injuries and diffuse brain injuries. Anyone with medical knowledge able to tell me if they're omitting anything there? Personally, I'd want to look at neck injuries too - but that's my personal anecdote - and look at some injuries which helmets should not affect as a control.
Section 3.1.3 cites a test from Cripton et al 2014 using a disembodied dummy head as proof of performance and concludes with a claim that a 7.7 m/s impact is unlikely to happen in the real world. That's only 17mph. How slow are Jersey cyclists?
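For what it's worth, the speed conversion checks out. A quick sketch (my own arithmetic, not from the report):

```python
# Convert the cited impact speed to mph to sanity-check the "only 17mph" point.
# 1 m/s = 3.6 km/h; 1 mile = 1.609344 km (exact by definition).
impact_ms = 7.7                            # impact speed cited from Cripton et al 2014
impact_mph = impact_ms * 3.6 / 1.609344    # m/s -> km/h -> mph
print(round(impact_mph, 1))                # -> 17.2
```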
Aside: Figure 3-2 has "Probability of sever brain injury" on one label - another symptom of hasty production?
There's actually some interesting mention of two 2013 studies (McIntosh et al and Olivier et al) about angular acceleration. The study which they cite in favour of a reduction (McIntosh et al) also might be saying it could increase because of the change in centre of mass. It also mentions HIC15
and I'm not sure what that is. It doesn't seem to be expanded on first use - another editing error?
Crushing is covered but there's a confusing mention of a maximum test force of 470 lbf followed by a statement that they crushed a skull with 520 lbf. Is crushing an interesting factor? Helmet promotion that I've seen seemed to focus on impacts. I thought that was because if your head was getting crushed, something nearby was probably crushing your body, rendering the helmet irrelevant to your survival.
An analysis of pedal cycle casualties on Jersey is rather curious. It says "The most common injury location was to the upper and lower limbs and head/face" then the table reveals that this is split into upper limbs, lower limbs and head/face, with upper limbs over twice as frequent as any other category. It then mentions some helmet usage data, but doesn't link this to injury site category. Was that data not present?
Rather than analyse the local data set they presumably have, the report then moves onto 2004-2008 data from Victoria, Australia. It describes the most common crash types for children there as being struck by vehicles emerging from driveways (41.6%) and being struck when emerging from a footpath into the path of a vehicle (32.3%). Somehow it then concludes that non-traffic incidents are the primary type of collision to protect children from. How can that be, when arithmetic suggests it could be 26.1% of incidents at most? I think that ignoring the two most common crash types looks like a massive logical disconnect.
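The arithmetic behind that 26.1% bound, as a sketch (the two percentages are the report's; the ceiling calculation is mine):

```python
# The two most common child crash types in Victoria (2004-2008) already
# account for 73.9% of incidents, which caps whatever "non-traffic" could be:
driveway_pct = 41.6    # struck by a vehicle emerging from a driveway
footpath_pct = 32.3    # struck when emerging from a footpath into a vehicle's path
non_traffic_max_pct = 100 - driveway_pct - footpath_pct
print(round(non_traffic_max_pct, 1))   # -> 26.1, the most non-traffic could be
```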
It does move onto data sets from Israel and UAE which seem to suggest simple falls are by far the most common there, plus data from Finland (an all-age dataset which it seems to disclaim slightly by stating 31% were alcohol-related) and Hong Kong (where large quotes are used which raise more questions than answers, like what is "younger" and "older" in them).
Then there's mention of three computer simulation studies and an accident investigation study. The accident investigator sounds to me dangerously biased pro-helmet, describing helmet use as "unsatisfactorily low". One computer study looked at falls caused by skids and kerb-strikes, while the second looked at handlebar wobbles, kerb-strikes, side-impact by car and rear-impact by car, then the third looked at handlebar wobbles, falling off a kerb, hitting a wall and side-impact by car. It's disappointing that none of these match up with the most common crash types for children (or adults for that matter), plus I suspect they'll be using adult models on the bike because it's not stated.
Finally, there's hospital admission data from the Netherlands, the USA, France and New South Wales plus police data from Victoria and New South Wales. I'm rather unhappy that selected figures are quoted in prose rather than fuller data in tables, because both of those data sources have well-known biases towards more serious injuries. There are also different health systems (public and private insurance, for starters) which may influence hospital attendance.
Are there really no new studies using car-style full-body crash test dummies? That's always struck me as an interesting route to explore.
The next section contains some data on helmet fit and its effect. A study (Hagel et al 2010) claiming to give patterns of incorrect helmet use based on field observations is cited just after a paragraph listing a number of fit characteristics (such as "more than two finger breadths space between the head and the helmet") which I don't think you can assess by merely observing a passing cyclist, can you?
There's an interesting titbit (Romanow et al 2014) estimating the increased risk from poor fits of various types, but again that's reported as being from hospital data so I'm sceptical that they can estimate the pre-crash fit accurately. It's also interesting that asphyxiation when not bicycling is mentioned. I wonder if that will appear again in any summary/calculation.
The summary starts off as a summary, then appears to cite selected uncorroborated single-source statistics as definitive and global, such as one about 47.5% of cyclists 18-or-under being injured at home. That one leapt out of the summary as incredible. Looking back, the cited source Mehan et al 2009 is the USA hospital data which I already criticised above. The "summary" continues like that, a pick-and-mix of numbers from different places and methods, often without stating any limitations. I think that summary should be considered harmful.

Legislation effects
The next section looks at places which have legislated. Most of the first part feels unsurprising to me, like legislating increases usage rates, plus the police reports think most riders wore a helmet. The second part looks at injuries and strangely complains for most of the first page that it's difficult to measure because it "is often confounded by factors out with the control of researchers" and it's hard to do a randomised control trial. Why is that concern suddenly worth mentioning now and not in the previous section about helmet effectiveness? Are they about to be unable to find an effect?
Next there's a citation of a 2008 review of earlier North American studies. The date limitation back in the first section means that the earlier studies could have been in the TRL 2009 review already, but it's not mentioned whether they were. There are a couple of other North American studies cited where it does mention that there may be alternative explanations but doesn't say what. There's a similar thing with a 2010 Australian review including 1994 data. That early date makes me realise that there's not been any mention yet of the years when the various places examined legislated. Checking Wikipedia, Australia legislated in 1989, which would make me doubt research published 2008-2014 is going to tell us much about the difference legislation makes. Canada legislation has changed over the years involved, but apparently the effect is "minimal".
The summary of legislation on injuries concludes rather weakly "there is no evidence of an increase in injuries following...legislation" but "legislation is likely to result in a reduction in *reported* head related trauma, particularly for children" (my emphasis). Neither conclusion cites anything. So yes, it appears the grumbling about the difficulty was because this section is null.
Whether legislation reduces the amount of cycling is the focus of the next section. Again, there's lots of grumbling about how difficult it is to distinguish the effect of legislation from other things. Then the report breaks its time limit and cites a load of 1990s Australian studies alongside more recent ones, arguing both ways, including an unattributed claim it calls "speculative". There are studies from Canada cited as concluding "no evidence" and from the USA of a "modest but statistically significant 4-5% reduction".
The report tries to weigh up health costs and benefits. Bizarrely, from nowhere, it wheels out a 2010 spin on the old "inhaling air pollution" claim to take the edge off the health benefits of cycling, ignoring the recent 2014 London study, then an unsupported repeat of the old "should a person stop cycling, that exercise may be offset by participation in alternative physical activity" claim. Then it finishes by saying that the net benefit found by a 2012 formula changes depending on the measurements we put in. Well, duh. So apparently there's no conclusion here on whether reducing cycling outweighs helmet benefit.
There's a short section on attitudes to cycling, which is strange given the earlier claim that this report was going to look only at primary outcomes. We'll see where that goes, if anywhere.

Summaries and scrutiny
Then there's a summary of the effect, which seems to say Jersey is a different cycling population to Australia, Canada or the USA (why wasn't it noted that Jersey isn't the USA, Israel or the UAE when we were looking at child injuries???) and finishes with the very weak "Assuming only a temporary modest reduction in child cycle participation, it is reasonable to assume that mandatory cycle helmet legislation has no long-term effect on public health".
The penultimate section is from a scrutiny panel meeting. It has some graphs claiming that 50% of adult Jersey cyclists and 86% of child cyclists wore helmets by 2013, up from around 30% adults in 2010. There's not a lot of other useful statistics in there.
Finally, the summary and conclusions section. It puts forward three questions and, in my opinion, the answers given are mostly unsupported by the evidence as described here. The full evidence may support them and it may simply be that this report has not reported the key facts - but checking that would require a lot more work, plus some guessing at the links TRL saw between the answers and the earlier evidence, which are generally not written down.
There's also a mention of "other considerations" which seems like new information (more hasty drafting?), such as the negative effect on cycle touring (is anyone here more likely to go to Jersey now?). There's a recommendation that Jersey should make other cycle safety improvements - surely that will make it difficult to measure the effects of the helmet legislation in the way the authors were complaining about earlier? - followed by a recommendation to monitor and evaluate the effects that they've just aliased.
The 2009 TRL report gets a brief mention in the summary but none of its findings seem to have been used at all - anyone remember what they were? Asphyxiation did not get mentioned again and the section on attitudes did not seem to go anywhere.

Other reviews
Now I'm going to read http://www.ctc.org.uk/news/helmet-law-1 ... and-jersey
and http://www.theguardian.com/environment/ ... t-evidence
and see what I missed, or if I agree with it.
Chris Peck on CTC calls it "hastily compiled", attacks the premise that government is requiring cyclists to wear helmets rather than improving the government-provided infrastructure that puts them in dangerous situations, then mentions the 2012 formula that TRL dismissed in a daft manner and highlights the mostly-ignored tourism harm.
Peter Walker on the Guardian has interviewed a couple of people involved. Some of them don't seem to believe the TRL report much. He does note the unclear links between cited evidence and conclusions that I saw, plus some information from other evidence to Jersey's scrutiny office about evidence that TRL did mention.
He links to an interesting analysis http://beyondthekerb.wordpress.com/2014 ... e-unicorn/
which makes a fairly major point that I missed completely. It's based on the Jersey hospital data mentioned by TRL but an aspect I don't think was mentioned, and the helmet-wearing data that was in the scrutiny report: already 84% of children there are wearing helmets while no under-14s were seriously injured on bikes. So the legislation logically can have little effect at best and cannot deliver a measurable benefit because the number of KSIs is already zero. It's pure cost to Jersey.
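That ceiling-effect argument can be put as trivial arithmetic (my sketch, using the figures just quoted):

```python
# With child helmet wearing already at 84% and serious child cyclist casualties
# at zero in the cited data, the law has no measurable injury reduction to deliver.
current_wearing = 0.84                    # share of Jersey children already in helmets
max_new_wearers = 1.0 - current_wearing   # best case: the law converts the rest
serious_child_ksis = 0                    # under-14 cyclist KSIs in the hospital data
# Even a perfectly effective helmet cannot reduce a count below zero:
max_measurable_reduction = serious_child_ksis * max_new_wearers
print(max_measurable_reduction)           # -> 0.0
```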
Then there's background on the legislator pushing for this law, who just happens to be UK chair of Headway, plus another take on why helmet compulsion seems far less worthwhile for the island than infrastructure improvements.

Conclusion
So to conclude, this seems like a very confused and ambiguous report, in which statistics are stretched to support answers which it's not clear they actually support. I'm going to be very sceptical of TRL reports in the future.