Steady rider wrote: There may be many confounders, but they are not forced to have much effect.
The foundation for any sort of well-done case/control study is that the two groups should be near-as-dammit identical aside from your intervention (and ideally riders should be randomised as to who gets the intervention and who is the control, and neither party should know which is which). This foundation is undermined by various aspects of the nature of cycle helmets and how they are used. Some confounders can be corrected for, but when your main basis of testing is on shifting sands to start with, that is very hard to get around.
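To make the confounding point concrete, here's a toy simulation (all numbers invented, nothing to do with any real study) of one hypothetical confounder, "risk appetite", that affects both helmet choice and crash risk. With self-selection the groups differ on the trait before any intervention is applied; with randomisation they don't:

```python
import random

random.seed(0)

# Hypothetical trait that drives both helmet choice and crash risk.
riders = [{"risk": random.random()} for _ in range(10_000)]

# Self-selection: cautious riders (low risk appetite) choose helmets.
self_selected_helmet = [r for r in riders if r["risk"] < 0.5]
self_selected_bare = [r for r in riders if r["risk"] >= 0.5]

# Randomisation: a coin flip, independent of the trait.
random.shuffle(riders)
rand_helmet = riders[:5000]
rand_bare = riders[5000:]

def mean_risk(group):
    return sum(r["risk"] for r in group) / len(group)

# Self-selected groups differ markedly on the confounder (~0.25 vs ~0.75);
# randomised groups are near-identical (~0.5 vs ~0.5), so any outcome gap
# can be pinned on the intervention rather than the rider.
print(round(mean_risk(self_selected_helmet), 2), round(mean_risk(self_selected_bare), 2))
print(round(mean_risk(rand_helmet), 2), round(mean_risk(rand_bare), 2))
```

The catch, of course, is that you can't blind or randomise real-world helmet wearing, which is exactly why the foundation is shaky.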
Steady rider wrote: Weaknesses with a meta-analysis approach to assessing cycle helmets <snip>
If you can't trust one paper, then aggregating 100 of them isn't necessarily going to help; that strikes me as the fundamental weakness there. Good science should be reproducible, but even if, say, Olivier's hand-picked cherry selection all conclude helmets do positive good, their assessments of how much are all over the place.
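A toy illustration (invented numbers, not a model of any actual meta-analysis) of why pooling doesn't rescue a shared flaw: averaging many studies shrinks random noise, but a systematic bias common to all of them passes straight through to the pooled estimate:

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # suppose, for the sketch, the real effect is zero
BIAS = 0.4          # a systematic bias shared by every study's design

def biased_study():
    # Each study reports the true effect plus the shared bias plus noise.
    return TRUE_EFFECT + BIAS + random.gauss(0, 0.2)

one_study = biased_study()
pooled = sum(biased_study() for _ in range(100)) / 100

# Pooling 100 studies averages away the noise, but the pooled estimate
# still sits near BIAS, not near TRUE_EFFECT.
print(round(one_study, 2), round(pooled, 2))
```

Meta-analysis is a precision tool, not an accuracy tool: it can't correct a bias baked into every input.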
Population work should fare better than hospital admissions studies because you're looking at the totality of riders, so you should be able to spot genuine trends, but you run into different problems there: often you end up looking at "average cyclists" who don't really exist on the ground. In looking at everyone you blur the very real boundaries between different groups. This is still useful for informing public policy, but not so much for answering the question, "would I personally be better off in a hat, and for what particular values of 'better off'?"
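The "average cyclist" problem in miniature (figures entirely made up for illustration): pool two very different riding populations and the population-level rate describes neither of them:

```python
# Hypothetical subgroups with invented injury rates (per million km).
groups = {
    "sport riders":   {"share": 0.3, "injury_rate": 9.0},
    "utility riders": {"share": 0.7, "injury_rate": 1.0},
}

# Weighted population average across both groups.
population_rate = sum(g["share"] * g["injury_rate"] for g in groups.values())

# Roughly 3.4: far below the sport riders' rate and well above the
# utility riders' rate, so it is true of nobody in particular.
print(round(population_rate, 2))
```

Fine for setting policy across the whole population; useless for telling any individual rider which group's risk profile applies to them.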