Valid how? A response to the EBM manifesto


We’ve been thinking about possible responses to the EBM manifesto (EBMM) that was published after this year’s Evidence Live conference. We are enthusiastic about the manifesto as a way of improving the theory and practice of EBM. That said, we think that improvements are possible, particularly in making evidence more helpful for clinical decision-making, as we set out below.

1. Introduction: aims vs. methods

We begin with a quote from the manifesto:

“There are three basic principles which are core to this manifesto, and to the action plan that will follow: transparency, education and partnership. These are essential for creating effective long term practice and policy changes for the betterment of healthcare. Importantly many actions outlined in this draft manifesto will require a new generation of leaders equipped with the skills to develop and communicate high quality evidence and eradicate poor research and publication practices.”

We think that the aims of the EBMM are laudable, appropriate, and clearly stated. Likewise, it is hard to argue with any of its individual recommendations for improvement. However, taken as a whole (as the word manifesto suggests), we worry that the collective effect of these recommendations is to suggest that the main failing of current real-world EBM practice is biased trials. While we agree that tackling bias is critically important and urgent, we suggest that the emphasis of the recommendations made in the EBMM inappropriately downgrades other causes of failure, particularly those that affect the use of evidence in the clinical context.

2. What are the problems that the EBMM addresses?

The EBMM lists 20 problems “underpinning the need for better evidence for better healthcare”:

  1. Publication bias
  2. Poor quality research
  3. Evidence production problems
  4. Research more likely to be false than true
  5. Reporting bias
  6. Ghost authorship
  7. Financial and non-financial conflicts of interest
  8. Estimating costs of new treatments
  9. Under reporting of harms
  10. Delayed withdrawal of harmful drugs
  11. Lack of Shared Decision Making strategies
  12. Trials lacking external validity
  13. Regulatory failings
  14. Criminal behaviour
  15. Rise of surrogate outcomes
  16. Unmanageable volume of evidence
  17. Clinical guidelines beset by major structural problems
  18. Too much medicine
  19. Prohibitive costs of drug trials
  20. Trials stopped early for benefit

(Source: http://evidencelive.org/reasons/)

To be clear, we agree that these are serious problems, and that solving them is of the first importance in making medicine better. Our worry is about how these problems connect with the list of solutions that forms the bulk of the EBMM. In summary, these recommendations are:

  1. Set a clear agenda for incorporating evidence into decisions so that patients are informed and happy with the decisions that they make.
  2. Develop tools for practice that can best support and serve patient choice.
  3. Expand the role of patients in the co-design of all types of clinical research.
  4. Deliver independent trials that are transparently reported for technologies that matter to patients.
  5. Eradicate publication and reporting bias in trials.
  6. Reduce excessive costs of trials in order to promote trial replication.
  7. Increase the uptake of trial outcomes that are relevant and matter to patients.
  8. Deliver more informative systematic reviews through the wider uptake of unpublished evidence.
  9. Set out a comprehensive package of what constitutes rigorous research and how to achieve it.
  10. Create an understanding and structure for the different types of research that underpin decision making that matters to patients.
  11. Develop a strategy to inform and educate policy makers and politicians about use of robust informative evidence in decision making.
  12. Develop guideline processes and recommendations from only high quality evidence.
  13. Phase out guidelines that have been developed with conflicts of interest in mind and do not reflect the real world setting.
  14. Set out key principles that permit the appropriate and ethical use of routinely collected data that may provide genuine benefits to patients.
  15. Work with medical journals to find ways in which users can have some form of access to all medical research.
  16. Promote better research to lay audiences and improve the training and education for better communication of research findings.
  17. Develop a clear and coherent international system for declaring and managing conflicts of interest in order to reduce their impact on healthcare decisions that matter.
  18. Improve the quality and transparency of evidence submitted to regulators and hold them to account with respect to approvals of treatments of uncertain net value.

We have attempted to connect items in the problem list with items in the solution list. This began with an attempt to classify the problems. We tried several ways of doing this, and settled on a classification that distinguishes problems that are (in some sense) about trial validity from problems that are not. We then further divided the validity arm into problems that are mainly about internal validity and those that are mainly about external validity.

Table 1: re-classification of the 20 problems, with the recommendations that address each (the bracketed letters a–r refer to recommendations 1–18 above, in order)

Mainly about validity – internal validity:

  - Publication bias [e, h, l, o]
  - Poor quality research [d, e, i, j, l]
  - Evidence production problems [d, h, i, m, o, r]
  - Research more likely to be false than true [d, e, i, j, q]
  - Reporting bias [d, e, g, i, j, q]
  - Ghost authorship [d, e, i, l, m, q]
  - Financial and non-financial conflicts of interest [d, e, i, l, m, q]
  - Under reporting of harms [d, e, g, i]
  - Clinical guidelines beset by major structural problems [d, e, i, l, m, q] – this is a species of financial and non-financial conflicts of interest

Mainly about validity – external validity:

  - Lack of Shared Decision Making strategies [a, b, c]
  - Trials lacking external validity [d, g, m, n]
  - Rise of surrogate outcomes [g]
  - Unmanageable volume of evidence [no clear recommendation]
  - Trials stopped early for benefit [e, g, l]
  - Prohibitive costs of drug trials [f]

Not mainly about validity:

  - Estimating costs of new treatments [no clear recommendation]
  - Delayed withdrawal of harmful drugs [r]
  - Regulatory failings [d, g, i, r]
  - Criminal behaviour [d, i, m, q]
  - Too much medicine [g]

We think that this re-classification is conceptually worthwhile in itself, because it gives a better guide to action – for example, by helping researchers identify which problems are in need of solutions. Looking at the mapping, it is striking that some recommendations are referenced many more times than others:

  - 12 references: d
  - 10 references: i
  - 9 references: e
  - 7 references: g
  - 6 references: l, m, q
  - 3 references: j, r
  - 2 references: h, o
  - 1 reference: a, b, c, f, n
  - 0 references: k, p

It is likewise striking that the problems mainly concerned with internal validity are much more richly supplied with solutions (48 in total – 8d, 8e, 2g, 2h, 8i, 3j, 5l, 4m, 2o, 5q, 1r) than either the problems mainly about external validity (12 in total – 1a, 1b, 1c, 2d, 1e, 3g, 1l, 1m, 1n, plus one with no clear recommendation) or those not mainly about validity (12 in total – 2d, 1f, 2g, 2i, 1m, 1q, 2r, plus one with no clear recommendation).
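Because a hand tally over a mapping like this is easy to get slightly wrong, a short script makes the bookkeeping reproducible. The sketch below is ours, not part of the manifesto: it assumes Python, and the mapping is simply our transcription of Table 1. It counts how often each recommendation letter is cited per category.

```python
# A minimal bookkeeping sketch (ours, not the manifesto's): re-tally how often
# each recommendation letter (a-r) is cited in Table 1, per problem category.
# The mapping is our transcription of Table 1; an empty string marks a problem
# with no clear recommendation.
from collections import Counter

table1 = {
    "internal validity": {
        "Publication bias": "ehlo",
        "Poor quality research": "deijl",
        "Evidence production problems": "dhimor",
        "Research more likely to be false than true": "deijq",
        "Reporting bias": "degijq",
        "Ghost authorship": "deilmq",
        "Financial and non-financial conflicts of interest": "deilmq",
        "Under reporting of harms": "degi",
        "Clinical guidelines beset by major structural problems": "deilmq",
    },
    "external validity": {
        "Lack of Shared Decision Making strategies": "abc",
        "Trials lacking external validity": "dgmn",
        "Rise of surrogate outcomes": "g",
        "Unmanageable volume of evidence": "",
        "Trials stopped early for benefit": "egl",
        "Prohibitive costs of drug trials": "f",
    },
    "not mainly about validity": {
        "Estimating costs of new treatments": "",
        "Delayed withdrawal of harmful drugs": "r",
        "Regulatory failings": "dgir",
        "Criminal behaviour": "dimq",
        "Too much medicine": "g",
    },
}

overall = Counter()
for category, problems in table1.items():
    per_category = Counter("".join(problems.values()))
    overall.update(per_category)
    print(category, sum(per_category.values()), dict(sorted(per_category.items())))

print("overall", dict(sorted(overall.items())))
```

Any mismatch between the script’s totals and a hand count simply flags a transcription slip in one place or the other.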

This analysis makes our worry about the skew of the overall list of recommendations concrete. By far the majority of the recommendations are concerned with internal validity. Yet, as the taxonomy above suggests, more than half of the problems (11 of 20) are not about internal validity, but about a combination of external validity and pragmatic factors.

3. When solutions do not fit problems

This is a serious problem for two (related) reasons. The first is conceptual: the myth of epistemic purity. The idea here is that, if only a trial (or meta-analysis, or whatever) could be carried out with sufficient rigour – with adequate safeguards against bias and improper interests, and sound statistical analysis – then all the problems of EBM would be solved. When stated this baldly, the idea is so obviously false that it scarcely needs refuting (consider, say, trying to use an excellent meta-analysis to treat a patient who does not have the disease it addresses). Yet subtle traces of this assumption are to be found in the manifesto.

The second point is more practical, and more subtle: call it the validity trade-off. The idea here is that, taking evidence as a whole (and we like the ecosystem metaphor used by the EBMM authors), there is a tension: the more evidence is aggregated and refined to increase the precision of a result, the less informative the conclusion may be as a guide to action. This is a reasonably well-known problem (we would frame it in terms of the reference-class problem), and it is touched on in the manifesto, but we think it needs further research and thought. A simple example: a trial with an extremely homogeneous study population (exclusively Finnish 38–40 year old men, for instance) is likely to produce a ‘cleaner’ result than a similar study carried out in a more diverse population. Yet the second study will be more applicable to a mixed clinical population. In short, there is a trade-off between internal and external validity. This means that not all problems can be solved. Worse, it means that unintentionally privileging internal validity will both worsen external validity and divert attention, funding, and so on away from approaches likely to improve external validity.
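To make the trade-off concrete, here is a toy simulation – ours, not the manifesto’s, with every number hypothetical and chosen purely for illustration. We assume a treatment whose effect differs between two subgroups, and compare a trial restricted to one subgroup with a same-sized trial recruited from a mixed, clinic-like population.

```python
# Toy simulation of the internal/external validity trade-off (illustrative only;
# all numbers are hypothetical). The treatment effect differs by subgroup; we
# compare a trial restricted to subgroup A with a trial recruited from a mixed,
# clinic-like population.
import numpy as np

rng = np.random.default_rng(0)
n = 400                                  # patients per arm
true_effects = {"A": 4.0, "B": 1.0}      # hypothetical subgroup-specific effects
clinic_mix = {"A": 0.5, "B": 0.5}        # hypothetical case-mix seen in practice

def run_trial(group_probs):
    """Simulate a two-arm trial; return the estimated effect and its standard error."""
    groups = rng.choice(list(group_probs), size=n, p=list(group_probs.values()))
    subgroup_effect = np.array([true_effects[g] for g in groups])
    treated = subgroup_effect + rng.normal(0.0, 3.0, n)  # treated-arm outcomes
    control = rng.normal(0.0, 3.0, n)                    # control-arm outcomes
    diff = treated - control
    return diff.mean(), diff.std(ddof=1) / np.sqrt(n)

est_homog, se_homog = run_trial({"A": 1.0})  # homogeneous trial: subgroup A only
est_mixed, se_mixed = run_trial(clinic_mix)  # mixed trial: clinic case-mix

# The clinically relevant target is the average effect over the clinic case-mix.
target = sum(p * true_effects[g] for g, p in clinic_mix.items())
print(f"homogeneous trial estimate: {est_homog:.2f} (SE {se_homog:.2f})")
print(f"mixed trial estimate:       {est_mixed:.2f} (SE {se_mixed:.2f})")
print(f"average effect in the clinic population: {target:.1f}")
```

The homogeneous trial estimates the effect in its narrow subgroup cleanly; the mixed trial targets the quantity that actually matters at the point of care, typically at the cost of a somewhat noisier estimate. Tightening the inclusion criteria buys precision at the price of relevance.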

Of course, there are plenty of limitations to the mapping that we have attempted between problems and recommendations. It relies on background knowledge to interpret the short thumbnail sketches of both problems and recommendations supplied in the manifesto, and so it is somewhat subjective and certainly disputable – especially as most of these are conceptually complex issues.

Yet we suggest that the point is worth taking seriously: the recommendations emphasise internal validity, while the list of problems does not (or at least not to the same extent). It doesn’t matter how clean evidence is if it is either fraudulent or irrelevant. Fraud, waste, inaccuracy, low-quality statistical work and so on are failures that exemplify current concerns about EBM practice. But solving these issues of internal validity – particularly those surrounding the ways that trial populations are defined and the way that statistical analysis is done – can come at the cost of external validity.

We would suggest that a clearer taxonomy of failure is needed, because not all problems are solvable in all contexts. This is often true in a merely pragmatic way – you can’t get everything you want all the time. But for EBM it is sometimes true in a much stronger way: there is a tension between fixing certain pairs of problems, such that making one better threatens to make the other worse. This is particularly true of pairs where one member is drawn from the internal validity column and the other from the external validity column; in general, this is a consequence of the validity trade-off. For example, a statistical analysis will typically be cleaner if the study population is very homogeneous, but then the conclusions it supports won’t “matter to real world patients” (EBMM), just because real clinical populations are mixed. Recruiting more mixed trial populations would produce messier trial results, but these would be of more use in guiding the care of individual patients. This means that researchers need to make tough decisions about whether internal or external validity should be the priority in a particular clinical study. We worry that this kind of difficulty is insufficiently acknowledged in the manifesto as it currently stands: we should not think in terms of solving all the problems, but about which problems we should prioritise, and in which contexts.

In this, we draw on EBM methodology itself by being “conscientious, judicious, and explicit” about the ways that we understand both the theory and the practice of EBM.
