Paul the Psychic Octopus: A watery lesson in understanding clinical evidence

So the World Cup justifiably goes to Spain, and it seems that Paul, the now world-famous psychic octopus, predicted the results. In fact Paul demonstrated a seemingly legendary clairvoyant ability. Wikipedia informs us that he predicted the outcome of all of Germany’s games and the final with unerring accuracy. You may not have realised it (which probably reflects that you are not as sad as I am) but Paul offers an excellent lesson about a problem that lurks around the rock pools of clinical evidence. That problem is publication bias.

You see, while we heard about Paul’s outstanding achievements we didn’t hear much about the rest of the budding psychic menagerie. What about Desmond the psychic donkey, Ophelia the psychic ocelot, or that well-known psychic chinchilla, Dave? Paul represents the odd one out, the outlier: the infrequent but ultimately predictable clustering of seemingly unlikely events that is bound to turn up somewhere when enough animals are guessing.

Publication bias is an obvious problem for clinical single case studies. What usually drives the submission of a case study for publication?  It is the case that stands out, usually the patient that responded well to a fancy new approach. Essentially it is the outlier, which is why case studies are quite good for generating hypotheses but as good as useless for informing treatment choices.

But the problem doesn’t stop there. It also affects clinical trials and systematic reviews. A range of factors prevent negative findings from seeing the light of day, an effect that has been quantified in a Cochrane review: authors don’t submit them, editors and reviewers don’t accept them (and where they do, I’ll bet lots of clinicians avoid reading them). At the top of our evidence-based tree we have systematic reviews and meta-analyses of controlled trials. Obtaining an accurate pooled estimate of a treatment’s effect size requires the full spectrum of the data. The play of chance affects the results of every clinical trial (those clever folks at Bandolier have a great information sheet on this here: click the files titled “size” and “Bandolier bias guide”). Just as each individual patient in a clinical trial represents a data point in a varied data set that contributes to the overall estimate, so does each trial included in a meta-analysis. If we are missing some of our sample, and that loss is not at random, then our estimate will be skewed.

There are methods for investigating possible publication bias in meta-analyses, such as funnel plots and a selection of statistical tests. They are not perfect, and some would argue far from it. On reviewing our Cochrane protocol, one (eminent) reviewer quipped: “don’t spoil a good review by using funnel plots to look for publication bias, because they don’t, and the overwhelming evidence is that they never could.” But even where we can’t quantify it, publication bias may still be a problem.
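The logic behind the funnel plot can be sketched with another toy simulation (again my own hypothetical numbers): small trials are noisy, so their estimates scatter widely; if negative results go in the file drawer, the published small trials end up with inflated effects while the large trials stay near the truth, producing the classic asymmetric funnel.

```python
import random
import statistics

random.seed(1)

# Hypothetical trials of varying size estimating a TRUE effect of zero.
# Standard error shrinks roughly with the square root of trial size.
trials = []
for _ in range(2000):
    n = random.randint(10, 400)       # trial size
    se = 1.0 / (n ** 0.5)             # rough standard error
    effect = random.gauss(0.0, se)    # observed effect
    trials.append((effect, se))

# File-drawer filter: only trials with a positive observed effect
# reach publication.
published = [(e, se) for e, se in trials if e > 0]

# In a symmetric funnel, effect size is unrelated to standard error.
# After the filter, the imprecise (high-SE) published trials show
# systematically bigger effects than the precise (low-SE) ones.
small = [e for e, se in published if se > 0.15]   # small trials
large = [e for e, se in published if se < 0.07]   # large trials
print(f"Mean published effect, small trials: {statistics.mean(small):.3f}")
print(f"Mean published effect, large trials: {statistics.mean(large):.3f}")
```

Statistical tests for funnel asymmetry essentially formalise this comparison, which is also why they struggle when there are few trials to compare.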

Bandolier have argued that publication bias may be less of an issue where we have a wealth of data from large, well-performed trials, since these trials’ estimates should boast reasonable accuracy. But in reviews of physiotherapy research, where trials are frequently small and their results more variable, the problem is likely to persist even where studies have been performed with rigour, and we might expect effect estimates to be exaggerated.

So there we are, the Uri Geller of cephalopod molluscs has shone a light on evidence based practice. It might be a useful career change for him. That victory tour of Spain he had planned may prove risky. I seem to remember enjoying a lovely grilled octopus salad in Valencia a few years ago.

Reference

Hopewell S, Loudon K, Clarke MJ, Oxman AD, & Dickersin K (2009). Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews (1). PMID: 19160345

Comments

  1. Neil O'Connell says

    Oh, and for more info this is a great blog on the same issue from Ben Goldacre with a lean towards pharma research. http://www.badscience.net/2008/03/beau-funnel/