Back pain: It ain’t what you do it’s ….?

Every now and then I stumble across a paper that evokes the reaction “I wish I’d thought of that”. Such a paper recently turned up in the journal Rheumatology by Majid Artus and his colleagues at Keele University. They performed a systematic review that aimed to assess not the effectiveness of interventions, but rather the overall pattern of symptom change over time in back pain sufferers who take part in clinical trials, and how that pattern might vary between different types of intervention. They included 126 controlled trials of acute or chronic non-specific back pain.

This figure shows neatly what they found. It plots the responses from individual trials and what you see is a common pattern – people seem to get better and the effect size is not trivial. “Fabulous!” you might conclude, “treatment is good!”

Back Pain Research, Artus Rheumatology 2010

Overall responses (VAS for pain) up to 52-week follow-up in each treatment arm of included trials. Each line represents a response line within each trial arm. Red: index treatment arm; Blue: active treatment arm; Green: usual care/waiting list/placebo arms. ____: pharmacological treatment; - - - -: non-pharmacological treatment; ……: mixed/other. Fig 2 Rheumatology. 2010 Dec;49(12):2346-56. By permission of Oxford University Press. This figure may not be reproduced for any other purpose without permission

Perhaps, but look closely. The red lines are the index treatment arms (the treatment under investigation), the blue lines the active comparator arms, and the green lines the usual care, waiting list or placebo arms. No glaring differences jump out. They then performed a meta-analysis, subgrouping trial arms by the type of intervention received. This is always tricky, as which treatments you lump into a group is open to debate. Nonetheless they found that variation in treatment response did not seem to be explained by different types of treatment. The authors suggest that this may mean that factors not specific to the type of treatment given may account for the improvements seen. In other words Fun Boy Three and Bananarama (thanks to Melvin Oliver and James Young) may have been right when they sang “It ain’t what you do it’s the way that you do it, and that’s what gets results”.

But perhaps there may be a bigger story in here. It is interesting that the placebo treatment/waiting list control/no treatment group didn’t differ from the treatment groups. In some ways it is a shame that the placebo group couldn’t be separated from the no-treatment and usual care groups in the analysis but as the authors note there simply weren’t enough no-treatment/waiting list groups to analyse reliably. Dr Artus kindly shared his data (always the sign of a real scientist) and it is interesting that the existing data from waiting list control and usual care arms does not hint at a smaller effect size.

If this finding is supported by future studies it might suggest that we can’t even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the mucky spectre of regression to the mean and the beautiful phenomenon of natural recovery.
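Regression to the mean is easy to see in a toy simulation. This is a minimal sketch with made-up numbers (nothing here comes from the review): give each person a stable underlying pain level plus day-to-day fluctuation, enrol only those who score badly on the day, and re-measure later with no treatment at all.

```python
import random

random.seed(1)

# Toy model: each person has a stable underlying pain level (0-10 VAS)
# plus day-to-day fluctuation. People "enrol" only on a bad day (>= 7/10).
baseline, followup = [], []
for _ in range(50_000):
    underlying = random.uniform(3, 6)              # stable personal pain level
    today = underlying + random.gauss(0, 1.5)      # today's fluctuating score
    if today >= 7:                                 # enrols at their worst
        later = underlying + random.gauss(0, 1.5)  # untreated follow-up score
        baseline.append(today)
        followup.append(later)

mean_b = sum(baseline) / len(baseline)
mean_f = sum(followup) / len(followup)
print(f"enrolment mean {mean_b:.1f}, follow-up mean {mean_f:.1f}")
# The follow-up mean is substantially lower although nobody was treated.
```

The apparent improvement is pure selection: people were enrolled on an unusually bad day, so on an average later day they score closer to their personal average.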

We expect to see natural recovery play a big role in acute back pain trials but in this review 48% of the included data was from chronic back pain trials. There is a (seemingly plausible) argument I have often heard that in chronic conditions any improvement seen after a treatment (or in a trial) must be the result of the treatment since the patient has had their problem for so long:  “why would it get better now?”. It is common for the authors of trials that compare 2 treatments for back pain and find no difference to point to the fact that both groups improved from baseline as evidence that both treatments are equally (and meaningfully) efficacious. If the results of this review are maintained with future data then this is a genuinely shaky position (unless a no-treatment group is included in the trial for a direct comparison) and what we may well be observing is simply a statistical phenomenon.

In that case the song lyrics might go something like “It ain’t what you do and it ain’t that you do it”. And that’s a bad result.

About Neil

Neil O’Connell is a researcher in the Centre for Research in Rehabilitation, Brunel University, West London, UK. He divides his time between research and training new physiotherapists and previously worked extensively as a musculoskeletal physiotherapist. He also tweets! @NeilOConnell

Neil is currently fighting his way through a PhD investigating chronic low back pain and cortically directed treatment approaches. He is particularly interested in low back pain, pain generally and the rigorous testing of treatments. He also tends to get all geeky over controlled trials.


Artus M, van der Windt DA, Jordan KP, & Hay EM (2010). Low back pain symptoms show a similar pattern of improvement following a wide range of primary care treatments: a systematic review of randomized clinical trials. Rheumatology (Oxford, England), 49 (12), 2346-56 PMID: 20713495


  1. Hello from Athens Greece Neil,
    happy, prosperous and creative 2011 to everyone on BiM.
    I thought you might find this article interesting, you guys have probably already read it but just in case.
    As a reflexologist I find your posts helpful, interesting and challenging, and I blog them frequently. I wish (Santa) that next year you also examine and question research papers about other CAM modalities besides acupuncture… reflexology would be great!

    Spiros, Athens

    Νούσων φύσιες ἰητροί. Ἀνευρίσκει ἡ φύσις αὐτὴ ἑωυτῇ τὰς ἐφόδους, οὐκ ἐκ διανοίης, («Επιδημιών VI» 5,1)
    Nature is the only doctor of disease. Nature finds the means of treatment without thinking, like blinking or the movement of the tongue and other similar things.

    “Placebo may not rely on deception”
    “Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome”

    Conclusion: Placebos administered without deception may be an effective treatment for IBS. Further research is warranted in IBS, and perhaps other conditions, to elucidate whether physicians can benefit patients using placebos consistent with informed consent.

    Also a question, are there brain cells we will never use?
    8th paragraph
    Likewise, we are born with billions of brain cells we’ll never use, and many if not most of them can be lost or diseased before a person experiences undeniable cognitive deficits.

  2. David Colquhoun says:

    @Steve Bathe
    I don’t see the purpose of words like “biomedical model” and “biopsychosocial model”, especially when they are used as though they formed mutually exclusive categories.
    To paraphrase Richard Dawkins, there are treatments that work and treatments that don’t work. Packaging them into different “models” is just playing with long words. The efficacy of CBT needs to be tested just as the efficacy of a drug needs to be tested.

    You say

    “reductionist nature of research and the confounding variables of the human being as both subject and experimenter will not provide a definitive answer to the problem of pain”

    I expect that is what would have been said about tuberculosis or smallpox in times past. I don’t know where your confident judgment comes from. I have no idea whether better solutions to pain (or cancer, or…) will take 20 years or 200 years. They might indeed never be found, but it seems a bit early to give up.

  3. Steve Bathe says:

    I enjoy working in the biopsychosocial model. I think that it does help to frame people’s experience of chronic pain, and that it offers useful, pragmatic treatments to improve their lives: graded exposure, ACT, CBT skills.
    I agree that in real terms research has been going on over a short timescale, but I think that the reductionist nature of research and the confounding variables of the human being as both subject and experimenter will not provide a definitive answer to the problem of pain. That said, it is those individual research pieces being done that allow the systematic review referred to at the start of this thread to be undertaken. That is truly enlightening, but not in a way that any of the authors of the original articles would have envisaged as they set out to test their hypotheses. And for me that leads to the clinical usefulness of Neil’s quote
    “Not very catchy huh? But honest, evidence based and humble”.
    It is not that I have given up on research but I am very wary of it as it can offer false hope of useful, passive clinical intervention.

  4. David Colquhoun says:

    @Steve Bathe

    You say “as we emerge from the cul-de-sac of the biomed model”

    What other model do you want? The angelic reiki model?

    I think that all you are saying is that science has not yet succeeded in explaining or helping with low back pain. The problem turns out to be difficult, and serious research hasn’t been going on for very long – a drop in historical time, and far less time than research in physics. It would be silly to “give up” just because we haven’t had instant gratification. I know that isn’t much help to the clinician, but it’s true anyway.

  5. Steve Bathe says:

    Ah, to stumble across a voice in the wilderness. Really refreshing, open and honest discussion. As a chronic pain physio in an MDT pain team, a lot of this thread is a joy to read. We all appear to be struggling but that’s OK because it is a struggle. Our patients have their expectations of pain reduction and we steer them around towards improving quality of life measures despite their pain remaining – explained and less mysterious and frustrating, but still present and real. And that can feel unsatisfactory but it is what is achievable. I also feel that we are starting to see a bigger picture as we emerge from the cul-de-sac of the biomed model. It may involve the way that chronic pain affects the structure of the brain, and working with that knowledge will blur the boundary between physiotherapy and the psychological disciplines. Currently we are looking at Benedetti’s work on placebo with interest and also starting to follow threads on the role of the hippocampus. Times of opportunity.
    Thanks for the thread.

  6. I know that I’m a bit late joining this thread but I was on holiday when it started and it had been rolling around my head since then.

    Some of the things that Mary said about her experience really intrigued me, particularly in relation to having achieved a rather sudden and rapid decrease in her pain intensity after receiving some treatment for her back pain, which up to then had been fairly stable. As this is one of the conditions for causality, it seems perfectly logical to attribute a sudden and rapid change in pain intensity to what had occurred immediately before.

    If only a few people achieve these effects in a clinical trial we probably miss them, as the larger group who don’t get these large and rapid effects would likely wash out their effects. We also tend to assume that reductions in pain intensity that are due to natural recovery or regression to the mean probably aren’t immediate. Therefore, if pain intensity is assessed after the end of treatment, say at 6 weeks or so, the larger group of patients would have ‘caught up’ with the early rapid responders, which also might mask the presence of a small group.

    Anyway… I decided to have a look at some data from a trial that we conducted a couple of years ago (Hancock et al., 2007) to see if there was any possibility that the main results of no difference between “manipulation” and placebo could be hiding a group who responded early and rapidly after the intervention. The intervention in the trial was actually manual therapy, mostly mobilisation but also some manipulation if the therapist thought the patient needed it. The sample was patients with acute low back pain (less than 6 weeks) so we might expect natural history to have some effect, and maybe more than we would if we were conducting the trial on patients with more long-standing pain like Mary had. We probably wouldn’t expect natural history (or regression to the mean) to produce a group of rapid responders though.

    After 3 days of treatment I classified patients into 2 groups – early rapid responders who had at least a 4-point decrease in their pain intensity, and those who didn’t. Mean pain intensity at baseline was 6 (sd=2). A change of at least 4 points was chosen as I thought that this would represent a noticeable and meaningful change in pain intensity that a patient could be likely to attribute to the therapy they had just been receiving.

    There was indeed a small group of 6 patients (out of 59) who achieved a rapid decrease of at least 4 points on a 10-point VAS following 3 days of manual therapy. Then I looked at the group that received the placebo treatment (detuned ultrasound) and found a similar group of 8 patients (out of 60) who also achieved the same rapid reduction in their pain (≥ 4 points) over the same period. Even without any formal statistical testing the groups look pretty much exactly the same size.
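    For what it’s worth, the two proportions (6/59 vs 8/60) can be compared with a simple two-proportion z-test. A quick sketch – the normal approximation is crude at counts this small, so an exact test would be preferable, but the conclusion is the same:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # standard normal CDF via erf -> two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, p_value

# Rapid responders described above: 6/59 manual therapy vs 8/60 placebo
p1, p2, p_value = two_prop_z(6, 59, 8, 60)
print(f"{p1:.0%} vs {p2:.0%}, two-sided p = {p_value:.2f}")
```

    Roughly 10% rapid responders in each arm, and nothing close to a statistically detectable difference between them.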

    I’m very happy for anyone with a bad back to get relief from wherever they can, be it physiotherapy, CBT, chiropractic, massage, acupuncture, reiki, homeopathy or reflexology; after all, what does it matter to them – relief is relief. I don’t think that many people care particularly that their rapid response was more likely than not nothing more than a placebo response.

    But that is what it was.

  7. Just curious, did they have double blinded and allocation concealment as the inclusion criteria in the review?

    Neil O'Connell Reply:

    Hi Anoop,

    For all but the drug trials in here, genuine double blinding is not possible, as in therapy trials patients can tell what they’re getting. Since the review was not comparing between-group differences within trials, a thorough quality assessment of things like allocation concealment is not so important, since poor concealment only promotes a between-group bias.

    Also, all of the common sources of bias would have worked in favour of the index treatment arms of the trials (and exaggerated the difference between those effect sizes and the ones from the comparator/placebo/no-treatment groups), but we don’t really see that difference.

    So I would argue that that wouldn’t help to explain the results.

  8. ‘There is a (seemingly plausible) argument I have often heard that in chronic conditions any improvement seen after a treatment (or in a trial) must be the result of the treatment since the patient has had their problem for so long: “why would it get better now?”. ‘

    Actually, I think there are circumstances under which this argument is true. I am a chronic back pain patient, but I am also a scientist trained in understanding statistical evidence. So I agree that the patterns are both compelling and disturbing. And yet I don’t think they tell the whole story.

    I keep careful journals, and I am fond of making graphs 😉 I have seen similar gentle curves for my pain where I recovered by myself (or where I think the treatment wasn’t particularly effective). Normally, something goes wrong, my pain shoots up, but I expect it to gradually improve within 4-8 weeks. There is definitely a pattern of sudden jumps up (usually because I pushed myself too hard), followed by gentle returns to normal, much like in the graph you show.

    But there were a number of times where I just didn’t recover. My pain shot up and stayed up for much longer than usual (3-9 months). And then I found the “right” treatment, and experienced a step change: my pain decreased very rapidly, returning to normal within 2-3 weeks. When that happens, I think it is reasonable to say “there has to be something about this treatment, because there is no other reason for this rapid change to happen after months of no change”. In fact, I found that the most effective doctors and physical therapists whom I have met look for this step change as well: either there is a very rapid improvement, or the treatment is not working and we should move on.

    I have experienced this enough times, over many years, that I have a very hard time believing that this is a random coincidence. But whenever I have a sudden flare-up, I never have any idea whether I am going to recover by myself or not. In fact, I have evolved a strategy which amounts to “wait at least 4 weeks before trying to look for a fix, because it will probably go down by itself. But if the situation has not changed in 4 weeks, then start looking for physical therapy or some other treatment, because it’s unlikely to improve by itself”.

    The question is, of course, why am I seeing this and the trials don’t. I’d wager on a very low NNT for all back treatments, combined with the low number of cases which don’t improve naturally. If indeed in 90% of the cases we expect the situation to resolve itself, this may well overwhelm everything else.
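    The 90% figure can be turned into quick arithmetic (illustrative numbers only, mine, not from any trial): if 90% of episodes resolve naturally in either arm, and a treatment genuinely fixes half of the refractory 10%, the arm-level recovery rates end up at 90% vs 95%.

```python
natural = 0.90              # fraction of episodes that resolve by themselves
fix_rate_refractory = 0.50  # treatment fixes half of the cases that wouldn't

control = natural                                  # recovery without treatment
treated = natural + (1 - natural) * fix_rate_refractory
arr = treated - control                            # absolute risk reduction
nnt = 1 / arr                                      # number needed to treat

print(f"control {control:.0%}, treated {treated:.0%}, NNT = {nnt:.0f}")
# → control 90%, treated 95%, NNT = 20
```

    A treatment that genuinely rescues half of the non-resolvers still shows only a five-percentage-point difference between arms, which a modest trial will easily miss.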

    And then this goes right back to what Adam Bjerre said above: it’s important to know that many people will improve naturally, and it’s important to try to separate sham from genuine treatments. But in clinical practice there will be things that make a major difference for individual patients, even if on average they may be hard to detect. And this is what I think physical therapists and other clinicians are seeing and doing. If anything, the best ones I encountered were perfectly blunt: they can form hypotheses based on my symptoms and past experience. They have no idea why sometimes things work for some people and not others. The only way to make it work is to form a hypothesis, try it out, see if there is a change. And if there is not, then hope that this will provide the data for a different hypothesis. And then just keep fingers crossed and hope that something works, with the understanding that for some people nothing does, and we currently don’t know how to predict who will recover naturally, who needs treatment, and who has something that cannot be fixed or improved.

    Neil O'Connell Reply:

    Hi Mary,

    Thanks for posting some really interesting points. I agree that there are circumstances where the intervention itself has a role in recovery (for any number of reasons) and the data from the no treatment and waiting list group is currently inadequate for drawing firm conclusions – but it is interesting.

    In terms of your point about natural recovery washing out the real efficacy of back treatments – that is a possibility, but the challenge for any intervention is to offer unique added value over the natural course and other non-specific variables, and back pain treatments often limp across this particular finish line. If we need that much statistical power to show the real effect, just how small is that added value (and is it clinically rather than merely statistically significant)?

    Of course anecdotal evidence has problems, and natural fluctuations and recovery may still coincide with interventions – we’ll never know from this data. During a flare-up, if a hypothetical patient waits 4-6 weeks to see if symptoms self-resolve before seeking care, it is possible that the time of care-seeking will correspond with the time of improvement. That is not to invalidate your experiences at all; it just layers in enough uncertainty for us to be able to draw few conclusions.

  9. Neil wrote: “A reasonable prediction if our treatments work is that they will demonstrate clear effect in trials.” That is so important that it deserves repetition and emphasis. If an intervention cannot demonstrate an obvious benefit when fairly tested, how good can it possibly be? It’s amazing how much controversy and hand-wringing there can be about effect sizes that aren’t worth writing home about … even if they’re for real.

    There are no interventions for back pain that pass the “impress me” test. If there were, they would stand out clearly on that graph!

  10. Steve Kamper says:

    What a fantastic study, thanks for putting it up Neil. We see so many systematic reviews with just a few included studies that it’s great to see one with a big pile. I feeeel the power! More impressive for me though is the consistency of the shapes of the curves. Whatever a person’s view of evidence, it’s hard to imagine the degree of myopia necessary to deny that this must be telling us something; the hard part of course is working out what. I don’t actually have anything to propose beyond what’s already been floated on this page but I do have a question. What would happen if you took these people and gave them another course of treatment after 20 weeks? Would they improve and level off again, or does trial-enrollment therapy only work once?

    Neil O'Connell Reply:

    Cheers Steve,

    Interesting question, but the issue is whether the effects are due to trial therapy at all. We’ll never know, I guess, but if regression to the mean and natural recovery are big players here then it shouldn’t make much difference, unless you start your next bout of therapy near the peak of a flare-up.

    What the results speak loudly of, to me, is that there is probably not a therapy in the world that does not appear to work to the folks who deliver it. We already know this, but sometimes it’s a good thing when hard data really rams it home. Like I try to convince our students: you might want it to work, it might clearly appear to work, but ultimately you just can’t tell!

  11. Adam Bjerre says:

    First of all I have to be careful in my daily frustration not to “shoot the messenger” – you’re certainly not annoying at all… You are bringing up a very disturbing, unsatisfactory but very, very important issue. And the overall message – to be very careful which “experts” to listen to and which “approach/method/technique” to use – is vital. But in my daily practice I still need an “approach” – not a single one, but a strategy that is at the same time evidence based, reliable, valid, suited for the individual and efficient in terms of restoring function (sometimes through symptom “relief”), confidence and self-efficacy. But what if the evidence is lacking – and how do I know when it’s lacking? I know it is up to me to educate myself, as it is me who is making the clinical decisions with my patients every single day.

    A pivotal point, as I understand it so far, is that symptom relief, or a lessening of symptoms, is seemingly an unreliable evaluation of any kind of thing we do or don’t do in terms of non-specific back pain – despite the fact that most of what we do as clinicians is ultimately aimed at lessening those symptoms, which are the very reason the patient is seeing us. “Once you get pain you want to get rid of it” (the first line, as I recall it, in “Explain Pain”). Your conclusion, perspective or intention with the blog is that we should accept this downer of a point from a scientific viewpoint – and then what? It is unsatisfactory to me, as the symptom is the very thing that signals a problem somewhere in the patient’s system(s) – at least to the patient. You need a very, very trustworthy relationship between clinician and patient for the patient to accept this point.

    The frustrating thing for me as a practitioner is that it seems quite hard to build up a confident clinical approach when most of the time the scientific evidence keeps shooting down the efficacy in terms of pain relief and other quantifiable data. After all, uncertainty is always at the heart of the clinical encounter. David Butler has touched on this topic elegantly in his essay in Topical Issues in Pain 1.

    I feel I have tried to walk the steep way uphill towards a biopsychosocial paradigm, trying to integrate knowledge about neuroscience, pain psychology, behaviourism and dignity into a thorough assessment, the therapeutic education of the patient, the advice given, the active part of therapy and the passive part of therapy. At the same time I am trying to dismiss and forget (which can be quite hard – some kind of long-term potentiation, y’know…) the bankrupt biomedical paradigm. But somehow it feels that no-one is able to suggest any clinical approach that involves using the afferent pathways anymore. Are those pathways so last year? After all, they are the only ones I can still get my hands on… I am trying, though, to integrate more and more motor imagery into even more acute pain states, but the evidence is lacking (/not supportive?).

    The problem for the clinician is always the challenge of applying best evidence to therapy. Mark Jones and colleagues have written two excellent papers on that challenge. Their key points are “the challenge of what constitutes acceptable evidence to inform evidence-based practice with the over-reliance on quantitative methodologies that risk excluding valuable qualitative evidence to support sound practice”. Another key point is “the challenge for clinicians in maintaining best practice based on evidence which is still largely not available OR is compromised by limitations to research design with respect to population homogeneity, diagnostic inclusion criteria, intervention details, outcome measures, and critical appraisal tools”. In the second paper they have the brilliant quote by Ken Cox: “Scientific method focuses on one variable at a time across a hundred identical …(subjects) to extract a single, generalisable “proof”. … Clinical practice deals with a hundred variables at a time within one …(subject) … in order to optimise a mix of outcomes intended to satisfy the particular …(subject’s) current needs and desires.”
    I have the same experience as the authors when they suggest that “a somewhat narrow conception of what constitutes evidence-based practice and what constitutes acceptable evidence is creating challenges to clinicians wanting to apply best evidence to their physiotherapy practice.”

    I will now rest my case, as this has got to be one of the longest blog replies on BiM. I apologise. Hope I didn’t lose focus, or you, on the way.

    Thank you again for the brilliant blog posts and insights that you provide through BiM. It’s always a pleasure to read.

    Neil O'Connell Reply:

    Phew! Thanks Adam,

    A couple of short points. I don’t think in back pain we are making decisions with a lack of evidence – there is loads of evidence (118 trials in this review, not 126 as I stated in the original blog – my error; as another example, the most recent Cochrane review of manipulative therapy for back pain included 39 RCTs involving 5,486 patients). What we are arguably seeing is not absence of evidence, but rather clear evidence of absence. The message that throws up for practice is hard, but I would still stand by my initial suggestion, in reply to your first comment, as to how one might manage a back pain patient.

    I don’t personally subscribe to the “narrow view of EBP” position. A reasonable prediction if our treatments work is that they will demonstrate clear effect in trials. There are many different ways of running a trial but regardless the message that comes back is remarkably consistent.

  12. Not that I want to throw a whole bomb into the mix, but I will. There has been an article in The Journal of Pain, Vol 11, No 11 (November), 2010: pp 1074-1082, “Preference, Expectation, and Satisfaction in a Clinical Trial of Behavioral Interventions for Acute and Sub-Acute Low Back Pain” by George and Robinson, which demonstrates that matching treatment to patient preference may produce a better treatment response, as measured by patient satisfaction, than any particular treatment technique. Does this mean that the best chance for improvement is the technique that the patient believes in and wants? This is not the best-written article, but it is potentially meaningful. HORRORS! Would a well-run advertising blitz convincing patients of the effectiveness of a specific technique be the best way to help patients with LBP or other problems?

    Neil O'Connell Reply:

    Thanks John (or johnbarb?). Neat, interesting little study – similar effects have been shown in acupuncture studies (we’ve referenced them in previous posts – see “missing razor”). Still, the effect, while significant, looks fairly small in terms of pain and disability scores.

  13. Jono Stephens says:

    Thanks for posting this Neil,

    I’m confused by these results- doesn’t it seem odd that every line (including non-treatment lines) flattens off and remains fairly constant at that level from about 20 weeks right through to 50 weeks? Are people in treatment arms still receiving therapy during this period?

    Why would non-treatment, placebo and active therapy all show the same pattern?

    Why are there no therapies that show improvement for more than 20 weeks and no natural improvement groups that continue to improve beyond 20 weeks?

    Also it strikes me as odd that very few patient groups get worse again after their initial improvement during the follow-up period.

    Maybe I simply haven’t understood the study properly but this seems to be a very unusual result. I’d love to hear more thoughts about it.

    Neil O'Connell Reply:

    Hi Jono,

    Most trials run treatment for around 6 weeks or so, so those patients will not have been receiving treatment over the long term – what you are looking at is the long-term follow-up measures for the trials.

    The authors of the review suggest that treatments seem equivalent because they may be working through a common mechanism rather than their purported mechanisms – namely non-specific treatment effects. This is a reasonable conclusion, but there is another possible explanation, the one I refer to above, hinted at by what is currently limited data from no-treatment groups.

    The lack of improvement after a certain time is consistent with the vast body of observational data on the natural history of low back pain. The classic course is a rapid and substantial improvement (on average) that plateaus at 3 months and remains that way. Obviously that data comes from groups, not individual patients. In those groups some patients will have flare-ups, some will settle and many will remain the same – this averages out to a plateau.

    Jono Stephens Reply:

    Thanks Neil,

    Appreciate the clarification. It’s a pretty challenging result and certainly raises some strong and unsettling questions. Good food for thought.


  14. Fascinating data. The data for chronic back pain particularly interests me, because so many patients and therapists will vehemently protest that it cannot possibly be a coincidence when a treatment finally seems to have worked where others failed. I’m interested both in what’s actually going on and in how to respond rhetorically to that.

    I wonder: could there be a reasonably healthy minority of exceptions? Treatments that really did work, but not often enough to be considered effective? If so, it could really muddy the waters! For instance, suppose for the sake of argument that one case in ten involves a genuine therapeutic effect — the variables work out, the planets align — and a patient actually gets better because of the intervention. Such an effect would tend not to show up in trials, because a treatment with a 10% success rate really isn’t a very good treatment. But it WOULD be great raw material for the confirmation bias of the therapist. A few genuine successes would be mentally lumped together with recoveries that were coincidental and/or due to non-specific effects, and with apparent successes that weren’t (confusing patient satisfaction with positive outcome), etc. They would be a powerful intermittent reinforcement for therapists’ egos. 😉

    Neil O'Connell Reply:

    What you describe would essentially be a treatment with a “number needed to treat” of 10. Certainly possible, but you need a big old trial to show it, and if I were commissioning/funding care it wouldn’t be all that attractive! I agree that such a result would muddy the waters, but in principle you don’t even need an NNT of 10 or 15 to make a treatment appear effective. Since most patients arrive in clinic at their worst, natural recovery, regression to the mean, polite patients and placebo can all conspire to create the illusion of efficacy. That’s not to say that some treatments do not work for individuals – they might.
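    [Editor’s note: the point about illusory efficacy can be made concrete with a toy simulation. This sketch is my own illustration, not from the paper or this thread; the numbers (a long-run VAS of 4, a flare of about 2.5 points at enrolment) are arbitrary assumptions. If patients enrol during a flare, an untreated cohort still shows a sizeable average “improvement” purely through regression to the mean. An NNT of 10, by contrast, just means an absolute success rate of 1 in 10.]

    ```python
    import random

    random.seed(1)

    def simulate_untreated_cohort(n=1000):
        """Mean apparent 'improvement' (VAS points) in a cohort receiving no treatment."""
        baselines, followups = [], []
        for _ in range(n):
            usual_pain = random.gauss(4.0, 1.0)               # patient's long-run VAS score
            flare = usual_pain + abs(random.gauss(2.5, 1.0))  # enrols while pain is at its worst
            later = random.gauss(usual_pain, 1.0)             # follow-up drifts back to usual
            baselines.append(flare)
            followups.append(later)
        # The cohort 'improves' by roughly the size of the flare, with zero treatment effect
        return sum(baselines) / n - sum(followups) / n

    print(f"apparent improvement with zero treatment effect: "
          f"{simulate_untreated_cohort():.1f} VAS points")
    ```

    Any intervention delivered between enrolment and follow-up would appear to have produced that entire improvement, which is one reason even waiting-list arms in the Artus review improve.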

  15. Hello, hello, hello Neil,
    This last line of your reply to Adam is in line with the advice I have been given for CRPS 2 from my surgeon, doctors and physios. Yes, honestly and humbly!

    Obviously the message here is “acceptance” and “distraction”: enjoy the things you can do as often as possible, even with the occasional “ouch”. Sometimes we can do more than we thought possible, just by imagining that activity will be good for us, non-activity bad! Love your messages. Very honest.

    Neil O'Connell Reply:

    Thanks Jo,

    That’s good to hear, as for me that kind of honesty is at the heart of informed consent. I think it is not so often given in back pain (yet), but hopefully in time…

  16. Adam Bjerre says:

    Thanx for the great insight, Neil. But where does that leave me with the patient with back pain I’m seeing tomorrow? Should I tell him that now he’s participating in a study…? 😉

    Neil O'Connell Reply:

    Thanks Adam, nice suggestion(!). In answer I don’t know. For me the most important role that the clinician working in back pain has is to be thorough in terms of diagnostic triage – spot the red flags and specific pathologies. Beyond that the evidence is convincing for keeping active, not going to bed etc. After that advice an evidence based treatment discussion might go something like “back pain is poorly understood but we know that it’s benign and that if you remain active your prognosis is really very good. We have a few additional treatments that appear more helpful than doing nothing but we can’t be sure that they are doing what they were designed to do and over the long term, our best evidence tells us that they are unlikely to have a big influence on your outcome.”

    Not very catchy huh? But honest, evidence based and humble – which is about as confident as any of us should be at this moment?

    Adam Bjerre Reply:

    Thank you for your reply, Neil. I’m all for honesty, humility and evidence based medicine, but what strikes me as a practitioner when reading this is a feeling of “you’re not really helping”…

    Don’t get me wrong. I love your posts and this blog, among many other blogs/posts like Bronnie’s and Diane’s. I have been plowing through the Topical Issues in Pain series, Butler’s and Lorimer’s books among many others, plus quite a lot of studies, to find answers on how to help people as best as possible with their painful problem. Through your contribution among many others I’ve come to deeply respect the fact that we as practitioners probably have very little influence on some of the processes and mechanisms that we traditionally thought we had, and that is causing quite a stir for a lot of us. But somehow this post annoys me. Reading this observation makes me think “statistically interesting? – yes, evidence based? – certainly, humble? – indeed, helpful for how I should THEN approach my patient in the morning? – nooo….” It’s absolutely important to separate effective treatment from sham treatment, but in doing so we might overlook some important processes that these studies, in my opinion, do not show – the individuality, the patient-therapist relationship, how to set up the organism to produce the opposite of a stress response and finally re-build confident physical function.

    Maybe it’s just another of these scientist/practitioner issues, but I think it’s important. I feel you deliver a lot of new questions, but not a lot of answers – and I soooo desperately want some answers… Please… 🙂

    Sorry for the long post.


    Neil O'Connell Reply:

    Hi Adam,

    I understand your frustration (it is not my intention to annoy!). Working specifically with back pain research it is often hard to find a “good news story”. The reason I blogged about this paper is that I think it is a novel way of aggregating the data and as such it throws up an interesting (if not optimistic) perspective.

    In terms of then going on to suggest alternatives that is more difficult. I could perhaps say what I might do in the clinic, or give you the O’Connell pet theory on treating back pain (not that I really have one), but ultimately that would be misleading. Having said that I believe that my earlier suggestion on what to do/ say is actually a reasonable, ethical and evidence-based one. We only need to offer treatments that work – good triage, advice and education are treatments in and of themselves.

    The therapies are not short of “expert” authority figures, all happy to disseminate their approach and wisdom on back pain. Ultimately these are often merely data-free arguments from authority, or elaborate theoretical and practical models based on limited laboratory evidence (extrapolated so far beyond what the actual evidence tells us that they too are arguments from authority). A sceptic might point to that shaky foundation as a possible reason for the apparent lack of efficacy.

    Herbert and Bo recently wrote a fantastic paper about the process of bringing new therapies into practice and everybody should read it:

    Thanks again for your comments and please do keep reading!

  17. This is fascinating. It seems to confirm what I’ve been thinking for a while now: most back pain treatments just don’t work. As a pharmacologist, perhaps I should apologise for the fact that conventional analgesics don’t seem to work very well. Once again I have to console myself with the thought that serious medical research has been going on for a mere 100 years or so, a tiny length of time by historical standards. And it is a really hard problem.

    I often think that it would be better for pain specialists to admit a bit more bluntly that there is very little they can do in many cases. Of course the quacks should admit the same thing, but there’s not much chance of that happening.

    This paper is something that should have been discovered by the NICE assessment group in their recent appallingly bad report on low back pain. See for example,

    Neil O'Connell Reply:

    Thanks for your comment David, and great to see you on our forum. The problems are numerous in back pain – so many unknowns. We can only make a confident diagnosis in about 15% of cases – the rest we classify under the wholly unsatisfactory umbrella of “non-specific low back pain”. The mechanical models of back pain that have dominated the discussion over the last 60 years have arguably failed – even the things that we “know”, like sitting being associated with back pain or disc damage, fall at the evidence hurdle (see and ).

    As such, treatment moves from fad to fad, each with its pet theory; everyone has their cherished clinical experience to validate their practice, and none of us has the hard evidence from trials that would really back it up (quite the opposite!).

    The NICE back pain group wouldn’t have had a chance to see this data, as it post-dates their guidelines substantially, and no one has done a review quite like this in back pain, but your point about admitting that there is not much available is well made. Ben Goldacre said something about how the process of rational disinvestment from failed treatments is roundly ignored, and ultimately there is the human element: pain medics, physios and quacks (even well-meaning ones) all find it hard to do nothing.