What happens when systematic reviews tell us different things?

Conventional wisdom tells us that when we want an answer to a clinical question – such as, what is the evidence for treatment ‘X’? – we should look to systematic reviews, because they collate all the available evidence on that topic. Problematically though, systematic reviews on the same topic don’t always reach the same conclusions. This leaves us wondering…well, what the heck is the evidence for this treatment??

Well, some very keen researchers found this exact thing when looking at the evidence for causal risk factors for neck/upper extremity disorders in people who use computers.[1] This paper is worth having a look at and is freely available here. The authors were tipped off that there might be a problem with the reporting of the evidence for causal risk factors when two previous systematic reviews[2,3] drew conclusions that were a bit liberal (eg, ‘computer-related risk factors demonstrate a consistent relationship with musculoskeletal disorders’ and ‘upper extremity disorders are exposure-related in men and women using computers with adequate scientific evidence available to prevent these disorders’), despite nearly all conclusions being based on cross-sectional studies. As you may know, cross-sectional study designs only allow us to say that factor ‘x’ (eg, pain) tends to be around, and to vary in similar patterns, as factor ‘y’ (eg, awkward computer posture). They do not allow us to say that factor ‘y’ causes factor ‘x’. So these researchers were concerned, and rightfully so. They decided to summarize all the systematic reviews of causal risk factors or intervention studies relating to neck/upper extremity disorders in computer users, and to compare the conclusions made.

The results were fascinating. Seven reviews looked at causal risk factors for neck/upper extremity disorders, and all reached remarkably different conclusions, ranging from ‘consistent evidence’ to ‘extensively researched and generally well-established’ to ‘moderate evidence’ to ‘limited evidence’ for the association between computer-related risk factors and the occurrence of painful disorders. Granted, each included review had slightly different inclusion criteria and different quality/bias assessment criteria, and there was not a huge amount of overlap in the studies included across reviews. This might convince me that the comparison isn’t that valid…except that the six reviews evaluating interventions were decently consistent, despite these same limitations, in concluding that there is limited evidence for the effectiveness of specific interventions. On a good note, it seems that we can trust the systematic reviews on more specific conditions – eg, carpal tunnel syndrome. Reviews consistently concluded that there was insufficient evidence to support a relationship between computer use and the occurrence of CTS.

Perhaps we are now at the point where, if we need solid evidence, we only look for systematic reviews of systematic reviews. I personally find this a bit disconcerting, as for some conditions it could take ages before we could ‘trust’ the evidence. In actuality, I don’t think the situation is so bleak that we can only trust reviews of reviews. I’d argue that the quality of systematic reviews is improving over time (along with the quality of individual studies), allowing more trust to be put in the newer ones, as they tend to provide more conservative estimates and a greater discussion of knowledge gaps in the literature. Having said that, if an overview of systematic reviews is there – absolutely use it. And hopefully we will not be left with…what the heck is the evidence?!

About Tasha

Tasha Stanton is a postdoctoral research fellow working with the Body in Mind Research Group both in Adelaide (at University of South Australia) and in Sydney (at Neuroscience Research Australia). Tash has done a bit of hopping around in her career, from studying physio in her undergrad, to spinal biomechanics in her Master’s, to clinical epidemiology in her PhD, and now to clinical neuroscience in her postdoc. Amazingly, there has been a common thread through all this hopping, and that common thread is pain. What is pain? Why do we have it? And why doesn’t it go away? Tasha got herself one of the very competitive CIHR (Canadian Institutes of Health Research) postdoctoral fellowships and is establishing her own line of very interesting investigations. Her research interests lie in understanding the neuroscience behind pain and its clinical implications. She also really likes nifty experiments that may have no clinical value yet, but whose coolness factor tops the charts. Last, Tash is a bit mad about running, enjoying a good red with friends, and organizing theme parties. Tasha, aka Stanton Deliver, was the all-round best performer at the Inaugural BiM Table Tennis Comp.

Here is Tasha talking more about what she does and a link to her published research.
We have put BiM authors’ downloadable PDFs here.

References:


[1] Andersen JH, Fallentin N, Thomsen JF, Mikkelsen S (2011) Risk factors for neck and upper extremity disorders among computer users and the effect of interventions: An overview of systematic reviews. PLoS ONE 6(5):e19691.

[2] Tittiranonda P, Burastero S, Rempel D (1999) Risk factors for musculoskeletal disorders among computer users. Occup Med (Phila Pa) 14(1). PMID: 9950008

[3] Bergqvist U, Wolgast E, Nilsson B, Voss M (1995) Musculoskeletal disorders among visual display terminal workers: individual, ergonomic, and work organizational factors. Ergonomics 38(4):763-776. PMID: 7729403