What happens when systematic reviews tell us different things?

Conventional wisdom tells us that when we want an answer to a clinical question, such as what the evidence is for treatment ‘X’, we should look to systematic reviews because they collate all the available evidence on that topic. Problematically, though, systematic reviews on the same topic don’t always reach the same conclusions. This leaves us wondering…well, what the heck is the evidence for this treatment?

Well, some very keen researchers found exactly this when looking at the evidence for causal risk factors for neck/upper extremity disorders in people who use computers.[1] This paper is worth having a look at and is freely available. The authors were tipped off that there might be a problem with the reporting of the evidence for the causal risk factors when two previous systematic reviews[2,3] made conclusions that were a bit liberal (eg, ‘computer-related risk factors demonstrate a consistent relationship with musculoskeletal disorders’ and ‘upper extremity disorders are exposure-related in men and women using computers with adequate scientific evidence available to prevent these disorders’), despite nearly all conclusions being based on cross-sectional studies. As you may know, cross-sectional study designs only allow us to say that factor ‘x’ (eg, pain) tends to be around and vary in similar patterns as factor ‘y’ (eg, awkward computer posture). They do not allow us to say that factor ‘y’ causes factor ‘x’. So these researchers were concerned, and rightfully so. They decided to summarize all the systematic reviews published on causal risk factors or intervention studies for neck/upper extremity disorders in computer users and compare the conclusions made.

The results were fascinating. Seven reviews looked at causal risk factors for neck/upper extremity disorders, and all reached remarkably different conclusions, ranging from ‘consistent evidence’ to ‘extensively researched and generally well-established’ to ‘moderate evidence’ to ‘limited evidence’ for the association between computer-related risk factors and the occurrence of painful disorders. Granted, each included review had slightly different inclusion criteria and different quality/bias assessment criteria, and there was not a huge amount of overlap in included studies between the reviews. This might convince me that the comparison isn’t that valid…except that the six reviews evaluating interventions were decently consistent, despite these same limitations, in concluding that there is limited evidence for the effectiveness of specific interventions. On a good note, it seems that we can trust the systematic reviews on more specific conditions – eg, carpal tunnel syndrome. Reviews consistently concluded that there was insufficient evidence to support a relationship between computer use and the occurrence of CTS.

Perhaps we are now at the point where, if we need solid evidence, we look only for systematic reviews of systematic reviews. I personally find this a bit disconcerting, as for some conditions it could take ages before we could ‘trust’ the evidence. In actuality, I don’t think the situation is so bleak that we can only trust reviews of reviews. I’d argue that the quality of systematic reviews is improving over time (along with the quality of individual studies), allowing more trust to be put in the newer ones, as they tend to provide more conservative estimates and a greater discussion of knowledge gaps in the literature. Having said that, if an overview of systematic reviews is there – absolutely use it. And hopefully we will not be left asking…what the heck is the evidence?!

About Tasha

Tasha Stanton is a postdoctoral research fellow working with the Body in Mind Research Group both in Adelaide (at the University of South Australia) and in Sydney (at Neuroscience Research Australia). Tash has done a bit of hopping around in her career, from studying physio in her undergrad, to spinal biomechanics in her Master’s, to clinical epidemiology in her PhD, and now to clinical neuroscience in her postdoc. Amazingly, there has been a common thread through all this hopping, and that common thread is pain. What is pain? Why do we have it? And why doesn’t it go away? Tasha got herself one of the very competitive CIHR postdoctoral fellowships and is establishing her own line of very interesting investigations. Her research interests lie in understanding the neuroscience behind pain and its clinical implications. She also really likes nifty experiments that may have no clinical value yet, but whose coolness factor tops the charts. Last, Tash is a bit mad about running, enjoying a good red with friends and organizing theme parties. Tasha, aka Stanton Deliver, was the all-round best performer at the Inaugural BiM Table Tennis Comp.




[1] Andersen JH, Fallentin N, Thomsen JF, Mikkelsen S (2011) Risk factors for neck and upper extremity disorders among computer users and the effect of interventions: An overview of systematic reviews. PLoS ONE 6(5):e19691.

[2] Tittiranonda P, Burastero S, Rempel D (1999) Risk factors for musculoskeletal disorders among computer users. Occup Med (Phila Pa) 14(1). PMID: 9950008

[3] Bergqvist U, Wolgast E, Nilsson B, Voss M (1995) Musculoskeletal disorders among visual display terminal workers: individual, ergonomic, and work organizational factors. Ergonomics 38(4):763-776. PMID: 7729403



  1. “What happens when systematic reviews tell us different things?”
    I think it largely depends on whose perspective you take.

    Care funders may select the interpretation that has the greatest cost benefit.

    Care providers may select the option which maximises resource utilisation under the guise of efficiency.

    And dare I suggest that clinicians will probably ignore it until something definitive emerges that will enhance patient care.

    Like many things in life, we can draw comfort from the illusion of security but must temper it with the reality of uncertainty. Some are more tolerant of/comfortable with uncertainty than others.

    Thanks to all the contributors for the great resources


  2. Hi Kukuh
    Thanks for contributing to our blog on this – we would love it if you could do a little blog post for us on the article to which you refer – would you consider it?
    If so, Heidi will liaise re guidelines etc.
    Regardless, thanks a million, Lorimer

  3. Tasha Stanton says

    Thanks Neil and Julia! Great links btw Neil – you are a wealth of knowledge!!

    Julia, regarding your question about ‘systematic reviews of systematic reviews’, that is not an ignorant question at all! I didn’t explain it too well. I was kind of referring to two things. First, I was referencing the direction the Cochrane Collaboration seems to be going by creating an overview of a condition or a treatment. These overviews are designed to compile evidence from multiple systematic reviews of interventions into one accessible and usable document, often summarising one or two interventions for a specific condition. In this way, the overview links together and collates all the different information that is out there in systematic review form. Having said that, and second on my list, there are also true systematic reviews of systematic reviews (that’s in the title baby!) that compare the conclusions of various reviews based on the different inclusion criteria used.

    I take your point that, if doing a SR of SRs, why wouldn’t you just get all the primary studies from these systematic reviews and critically appraise them using standardised and validated means? Absolutely, but I reckon for some conditions this would be more difficult (and time-consuming) and would also be limited by the fact that you are specifying a discrete set of inclusion criteria. I guess it serves a slightly different purpose in my mind: updating a systematic review vs providing a comprehensive summary of the reviews out there.

  4. Kukuh Noertjojo says

    Thank you for a great commentary. I found out similar things when I was looking into the evidence on the topic in relation to claims.



  5. Hey Tash, I agree with Neil – great post. First up, I have to admit my ignorance of the methodology of “systematic reviews of systematic reviews”, but I don’t get why you just wouldn’t appraise the primary studies using consistent and rigorous criteria and methods. So, for example, if the systematic review was conducted from scratch, and inception cohort studies were specified as an inclusion criterion for study design, then the researchers would leap over the thorny issue of attributing causation in cross-sectional studies (which, as you point out, is just plain wrong – no offence intended to anyone, it’s just the way it is).


  6. Neil O'Connell says

    Great post Tash, and too true. Not all systematic reviews are created equal, and they should be approached critically like everything else.

    A good place for novice folk to start when reading a systematic review is with tools like the AMSTAR quality assessment tool http://www.biomedcentral.com/1471-2288/7/10, or the CASP tools approved by the UK NHS: http://www.casp-uk.net/ (click on the systematic review checklist). Applying these as you run through a review will help you rate how good it is.