Thanks to Richard Lehman’s blog this week, I’ve been reading a fascinating review of the advances made by EBM written by two of the co-founders of the movement. Both Benjamin Djulbegovic and Gordon Guyatt have made serious contributions to the growth of EBM, as they charmingly acknowledge in their declaration of interest section from their review paper:
Author: Brendan Clarke
We’ve been thinking about possible responses to the EBM manifesto (EBMM) that was published after this year’s Evidence Live conference. We are enthusiastic about the manifesto as a way of improving the theory and practice of EBM. That said, we think that improvements are possible, particularly in making evidence more useful in clinical decision-making, as we set out below.
I think that every academic has had the dispiriting experience of writing what they thought was a good paper, only to have great difficulty publishing it. This post is about one of these – which has been hanging about in my “do something with this, someday” file for four years or so.
I was particularly pleased to read the recent BMJ paper by Margaret McCartney et al, which discussed ways that EBM-based guidelines might be made to better serve the needs of individual patients. I’d urge you to read the whole thing (it’s not long at all), but here, I want to develop one of their themes a bit.
Their argument is that guidelines have been transformed into tramlines: excessively inflexible rules determining the correct treatment for an individual.
They make other arguments too – such as that guidelines “exceed the limitations of the evidence for many people”. Although this raises a subsidiary point about the disparity of power between clinicians and patients, it gets us into epistemically difficult territory too: guidelines – those paragons of objectivity and reliability – may end up as an expression of expert opinion rather than evidence. A startling factoid from this discussion: “Only 11% of American cardiology recommendations are based on high levels of evidence, with 48% based on the lowest level of evidence and expert opinion.”
Back to the tramline argument: the thumbnail sketch here is that guidelines have been diverted from their original purpose of reducing variation in patient care (A Good Thing) to an authoritarian and bureaucratic imposition that threatens patient autonomy and clinical judgement (A Bad Thing). McCartney et al don’t present things in this cartoon-badman way, and it’s not a position that I would straightforwardly endorse, but there’s something to be said for taking a more subtle form of this transition seriously. For example, General Practitioners in the UK are assessed by a series of targets called the Quality and Outcomes Framework (or QOF, to its friends). While these guidelines are optional (i.e. GPs don’t need to enrol on the scheme), they are linked to financial incentives – dubbed ‘pay-for-performance’. It is therefore in the financial interest of GPs to fulfil as many of the QOF indicators as possible in as many of their patients as possible. The indicators themselves are not terribly controversial. For example, they say that GPs should measure and reduce cholesterol in people with heart disease:
The percentage of patients with coronary heart disease whose last measured total cholesterol (measured in the preceding 12 months) is 5 mmol/l or less.
Given the link between pay and fulfilling this indicator, it’s not hard to paint the QOF as a tool that aims at compliance, rather than one that aims to make individuals healthier. As Richard Lehman recently blogged, the QOF doesn’t seem to make people live longer.
Note that this isn’t an argument against all one-size-fits-all guidelines. Some of these (flu vaccination is the example cited by the authors) make sense both for individuals and for groups. But sometimes, the best option for an individual might not coincide with the best option for the group. ACE inhibitors work well to control high blood pressure, but cause severe cough in about 20% of patients who take them. For groups that are fond of the sound of their own voice (like, you know, university lecturers) this side-effect is particularly disabling, and an alternative antihypertensive drug might be a better option, even if it is fractionally less effective at reducing BP than an ACE inhibitor.
The point that I take from this paper is that combining patient preferences with clinical evidence is highly desirable, but complicated. The ACE-inhibitor example above appears dead simple, but developing decision-making tools that genuinely take account of patient preferences, and that are generally fit for purpose, is much harder. How can clinical standards be maintained, with as little unwarranted variation in standards of care as possible, while accommodating individual patient needs, values, and preferences?
McCartney et al appear to agree with this. Their recommendation, at the end of the paper, is that “Patient decision aids should be published in tandem with guidelines, but better research is required into how to provide information about choices that is easily and quickly understood”.
Words by Brendan Clarke
I’m speaking at the EBM+ project meeting at UCL on Monday on a topic that I’ve been working on for a couple of months now. Very briefly, the talk is about Wigmore charts, and ways that we might use them to support clinical decision-making.
Wigmore charts (like the example here that I’ve borrowed from Anderson, Schum and Twining’s excellent 2005 book) were originally designed to support complex legal arguments. Imagine that you are trying to build a complex legal case: trying to convict someone of fraud, say. Wigmore charts are a tool for showing how such a complicated legal argument works. Here, the “ultimate probandum” is the legal verdict that you are trying to reach (in this case, something like “x knowingly defrauded y”). The chart shows the steps of the legal argument that support this final verdict, all the way down to the many pieces of evidence (often, in court cases, many thousands of them) on which the case is built.
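To make the structure concrete, here is a minimal sketch (not from the original post, and with an entirely hypothetical fraud example) of how such an inferential chart might be represented in code: a tree whose root is the ultimate probandum, whose internal nodes are intermediate inferences, and whose leaves are the raw pieces of evidence.

```python
class Node:
    """A proposition in a Wigmore-style chart, supported by zero or
    more sub-propositions; leaf nodes are pieces of evidence."""

    def __init__(self, claim, supports=()):
        self.claim = claim
        self.supports = list(supports)

    def evidence(self):
        """Collect the leaves: the raw evidence the case is built on."""
        if not self.supports:
            return [self.claim]
        leaves = []
        for child in self.supports:
            leaves.extend(child.evidence())
        return leaves


# Hypothetical example: the ultimate probandum at the root,
# intermediate inferences below it, evidence at the leaves.
chart = Node("X knowingly defrauded Y", [
    Node("X made a false statement", [
        Node("signed contract (exhibit A)"),
        Node("bank records contradicting the contract"),
    ]),
    Node("X knew the statement was false", [
        Node("email from X admitting the discrepancy"),
    ]),
])

print(chart.evidence())
```

The point of the representation is simply that every step between verdict and evidence is made explicit, which is what makes the chart useful for auditing an argument.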
My current thought is that these inferential networks would be useful in medicine too, particularly when dealing with complex decisions about evidence. I think that we might use Wigmore charts, or something similar, as a heuristic (see Chow’s recent BJPS paper for a cracking introduction). But to say more would give the game away.
You can have a look at my slides here – [2mB .pdf].
I’d originally planned to write something this week on the announcement that the Nobel prize in Physiology/Medicine has been awarded to Campbell, Ōmura and Tu. There’s lots of possible interest here – the Neglected Tropical Disease angle, say, or the unusual military aspect to be found in the intellectual history of Tu’s work on artemisinin. However, I’ve been distracted by something that came out of S. Lochlann Jain’s excellent new-ish book Malignant: How Cancer Becomes Us, which I’ve been avidly reading this week.