A sparkling, glittery threat to evidence based practice

Here at Brunel I run an MSc module on evidence based practice. In the first session of the module I give the group an honesty test. Here it is (answer it yourself and, well, be honest).

“What sections of a research paper do you routinely read? Honestly.”

Almost without exception the whole group will admit to reading the abstract, and a small proportion will claim to read the introduction or discussion. But on closer scrutiny we manage to whittle that down further. A few only read the conclusion of the discussion and the rest only the conclusion of the abstract. A whole paper boiled down to one or two sentences. Almost nobody claims with any conviction that they read the methods or results in detail. Many writers have highlighted the problems with this selective approach to the literature. A cynic’s view of the sections of a paper goes something like this:

Introduction: In which the authors seek to justify the importance of their research by cherry-picking any supporting evidence and ignoring the rest.

Discussion: In which the authors seek to interpret the results in a way that does not challenge their pre-existing worldview, by jumping through a selection of logical hoops.

Conclusion: In which the authors try to give you a take home message consistent with their pre-existing worldview.

Methods: One of the important bits.

Results: The other important bit (often with parts strangely missing).

This view is over the top, but the problem with research papers is that they are a human endeavour, and impartiality is not one of our greater virtues. Richard Feynman’s classic quote comes to mind: “The first principle is that you must not fool yourself, and you are the easiest person to fool.” This has become much more apparent to me since I became involved in conducting Cochrane reviews. The scale and commonality of erroneous, incomplete or selective reporting came as a big surprise.

Two new papers in the musculoskeletal field have just been published that speak loudly to this problem. The first is a fantastic cautionary tale. A French research group led by Sylvain Mathieu reviewed all RCTs in osteoarthritis, rheumatoid arthritis and the spondyloarthropathies published between 2006 and 2008. They went looking for the incidence of “misleading abstract conclusions”. Specifically, they looked for a selection of naughties: not reporting the results of the primary outcome, basing conclusions on secondary outcomes or the results of a sub-group analysis, presenting conclusions that are at odds with the data, claiming equivalence of efficacy in a trial not designed to test for it, and finally not considering the risk-benefit trade-off. Like a game of clinical trial Bullshit Bingo. They found evidence of misleading conclusions in 23% of reports. The only predictor of misleading conclusions was genuinely negative results. In trials with negative results the rate of misleading conclusions was, brace yourself, 45%. “WHOAH!”, I hear you exclaim.

It is well known that negative results present a unique challenge to science. Editors don’t like to publish them and researchers don’t like to submit them for publication, but it seems from this evidence that they also don’t like to accept them in the first place. This represents a fundamental failure of the scientific process. Why ask the question if you are only prepared to hear one answer? Clinical researchers need to remember that they are not in the business of trying to validate clinical practice; they are in the business of trying to test it. The two things are not the same.

Another recent review, led by Sidney Rubinstein, looked at whether the methodological quality of clinical trials of spinal manipulative therapy is improving, by reviewing trials in five-year periods from the 1970s to 2011. Happily, they do find a trend towards improving quality, but most of the risk of bias criteria were still met by fewer than half of the included trials published in the last decade.

What does that mean for those of us trying to make sense of research and help it guide our practice? When you see a nice positive conclusion you need to look closer to be sure it is not being embellished. Abstract conclusions cannot be taken at face value, even if they are appealing. There is a florid expression in East London: “You can’t polish a turd… but you can roll it in glitter”. Having the skills to detect said glitter in clinical trials is a must for anyone interested in evidence based practice.

If you can scrutinise a paper and make sense of the methods and data, then you can judge it and make an informed decision on how it should (or shouldn’t) affect your practice. It’s not that hard to learn and it’s actually fun when you get into it. Embrace your inner geek. A good place to start for clinical trials would be to read the section of the Cochrane Handbook on assessing risk of bias – it’s free to read here (check out chapter 8).

But we shouldn’t fall into the trap of thinking that since research is flawed, we’re better off relying on clinical experience. These two studies are a good example of how science, unlike opinion, is self-correcting. If these studies tell us anything it’s that those poor, neglected methods and results sections are where you’ll find the real gold.

About Neil

As well as writing for Body in Mind, Neil O’Connell is a researcher in the Centre for Research in Rehabilitation, Brunel University, West London, UK. He divides his time between research and training new physiotherapists, and previously worked extensively as a musculoskeletal physiotherapist. He also tweets! @NeilOConnell

He is currently fighting his way through a PhD investigating chronic low back pain and cortically directed treatment approaches. He is particularly interested in low back pain, pain generally and the rigorous testing of treatments. Link to Neil’s published research here. Downloadable PDFs here.

References

Mathieu, S., Giraudeau, B., Soubrier, M., & Ravaud, P. (2012). Misleading abstract conclusions in randomized controlled trials in rheumatology: Comparison of the abstract conclusions and the results section. Joint Bone Spine, 79(3), 262-267. DOI: 10.1016/j.jbspin.2011.05.008

Rubinstein, S., Terwee, C., de Boer, M., & van Tulder, M. (2012). Is the methodological quality of trials on spinal manipulative therapy for low-back pain improving? International Journal of Osteopathic Medicine. DOI: 10.1016/j.ijosm.2012.02.001

Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from www.cochrane-handbook.org.

Comments

  1. Eoin O' Conaire says

    Excellent piece Neil!

    I am currently getting started on a systematic review and it is shocking how many papers that seemed great on a first “read” (and that I quoted frequently) are uncovered as being of very low methodological quality once scored with an appraisal tool.

    And yes, the Cochrane Handbook is a fantastic “geek’s best friend”!

    Regards,

    Eoin

  2. Great article Neil, apposite and pithily to the point. I don’t think the analogy will leave my mind for some time, particularly after Steve’s verbal picture. I guess we all polish away at our respective turds in an effort to resolve the inner conflict between what we perceive and what is actually there. I know the article is aimed at research but in the end I dare suggest (gently) that this might be the microscopic scale of a much larger professional macroscopic “polishing”?

    But none of us do that, right?

    regards

    ANdy

    Steve Kamper Reply:

    Important point Andy, if somewhat uncomfortable for all of us.

  3. Great article Neil, thanks

  4. ian stevens says

    Paul, excellent! Obviously the glitter finish is optional but one wouldn’t need to repeat the experiment too closely to see how this could be achieved. I think any further discussion on the issue would lead to areas of behaviour that are probably best left alone.

  5. Many moons ago, my journey towards the EBM light began with a serious pain problem (iliotibial band syndrome) while I was training to be a massage therapist (an alarmingly evidence-free curriculum, alas). I had the important inspiration that, when the going gets serious, the serious go to the literature. But I knew nothing about the literature. Indeed, all I really knew about science I’d learned from Carl Sagan. I’d just finished reading Demon-Haunted World, which I was still fuming over, because I still wanted to believe in things back then. (It wasn’t until my second read a year later that I realized it was possibly THE GREATEST THING EVER WRITTEN.)

    So that was where I was at as a “researcher” when I started reading abstracts. And of course abstracts were all I read. I took them all at face value. It was science. I was impressed with myself just for using PubMed. That was enough. It was enough for at least a year. It was so easy! All I had to do was look shit up, and if an abstract had anything in it that sounded right, yahtzee! Make a footnote, instant credibility. It was like, um, magic.

    Eventually I actually read a paper. And it didn’t really square with the abstract.

    Uh oh.

    I remember that sinking feeling very well. “Oh my,” I thought. “I’m going to have to have another look at an awful lot of papers.” Checking the literature got a lot less sexy all at once. Fifteen years later, I am now cynically surprised that the rate of misleading conclusions in abstracts is ONLY a piddling 45%. I would have guessed about 70%!

    Andy Reply:

    70% would put you in good company, methinks. Whatever took you to a link about raising turd polishing to an art form?

    regards
    ANdy

    Paul Ingraham Reply:

    Just lucky! I happened to see that episode of MythBusters a few months ago. So of course I now think of it every time turd polishing comes up.

  6. Oddly enough, you CAN polish a turd. One of the more peculiar things the MythBusters have tested: http://t.co/RrSkDx2z

    Neil O'Connell Reply:

    Fantastic. Something for me to do on the weekends. I’ve needed a new hobby.

  7. Alex Chisholm says

    Thank you. Great points. In addition, on occasion, the Cochrane Database has also had a few issues in its own conclusions (Is quality control of Cochrane reviews in controversial areas sufficient? Bjordal JM, Lopes-Martins RA, Klovning A. J Altern Complement Med. 2006 Mar;12(2):181-3.) I don’t take the Cochrane Database as gospel either!

    Neil O'Connell Reply:

    Good point Alex. You certainly shouldn’t take the reports of Cochrane reviews as gospel, though methodologically they tend to be more robust. Often the issue is in interpretation, and I can think of a number of examples where I think a positive message has been dragged kicking and screaming out of equivocal data in Cochrane reviews.

  8. Great post, Neil!

    Some authors skip the results part altogether, if the method section doesn’t hold up.

  9. Annie Tucker says

    Thank you, thank you, thank you!

  10. What really needs to be taught in school? Is it all about publish or perish? Why not just spend all your time on the abstract? That really is a sad statistic. With so much literature coming out at such a fast rate, we need a way to cut down on the amount of reading we face to keep up to date, but this is not the way to do it. I’ll have to read the info you posted, as I must admit, like most other truthful souls, that I certainly start with the abstract, and if it interests me then I read the article.

  11. As an editor of an Elsevier-published, MEDLINE-indexed journal (Jnl. Bodywork & Movement Therapies) I found this piece fascinating. I’ve long suspected that readers skip the methods section (and others) and focus on the abstract and possibly the conclusions. This is now confirmed. Also of interest were the comments on publication of negative results, which is something we are happy to do and to encourage. Altogether a very useful piece of writing. Thank you.

  12. Steve Kamper says

    Great piece Neil. The issues of what gets reported, what gets submitted, and what gets accepted are really difficult, but hopefully increasing awareness, along with protocol publication and open review processes, will improve the situation. Regardless, I agree that just reading research articles is not enough; without knowledge of methods, the potential for being misled or for misinterpreting is great.
    Maybe your MSc course name needs a byline. Evidence Based Practice: Picking off the glitter?

    Neil O'Connell Reply:

    Cheers Steve,

    What an image! The byline I had run on my opening lecture was “Evidence Based Practice: Because you’re a liar, so am I, and so is everyone we’ll ever meet”.

    But yours is punchier. So I’ll use it.