The articles by Helfenstein1 and Newcombe2 highlight the difficulties faced by clinicians in making a treatment decision for their patient when confronted by contradictory evidence. We now face an ever-increasing array of data of variable quality, all of which must be considered if we are to reach the best treatment decision for our patients. Systematic reviews, the cornerstone of evidence-based medicine, are an important and increasingly utilised tool that uses predefined objective criteria to aggregate data from trials and so provide evidence on which to base clinical decisions. However, systematic reviews have their own problems. Some, such as the finding of increased mortality with the use of intravenous albumin,3 have been controversial and heavily criticised.4 Dr Helfenstein highlights another problem with meta-analysis1: depending on the model chosen, the interpretation of an individual trial within a meta-analysis can vary. Most clinicians, myself included, will not be familiar with the statistical techniques utilised in meta-analysis. Thus, the issues highlighted in these two articles1, 2 will add further to the confusion felt by many.
Given these problems, what evidence should we rely on to make a clinical decision for our patients? Should the findings of a randomised controlled trial that contradict the findings of a meta-analysis take priority in our decision making process? Clearly, there are no simple answers to these questions. We should certainly not return to the days when treatment was based solely on personal anecdotal experience and good trial evidence was disregarded. A rigid hierarchy on which to base a clinical decision has been proposed: in this model, case reports sit at the bottom of the scale, while randomised controlled trials and systematic reviews sit at the top.5 However, there may be situations where rigid application of this hierarchy is inappropriate. For example, observational studies may provide evidence that is as good as, if not better than, that provided by randomised controlled trials.6 Certainly, this is the current situation when one focuses on the harms caused by medicines, where randomised controlled trial evidence is singularly absent or unreliable. Similarly, poorly conducted randomised controlled trials that are then included in a systematic review can produce erroneous and contradictory results.7 By contrast, a single well-conducted randomised controlled trial can overturn many years of conventional “wisdom” that may have been based on observational data.
For example, a meta-analysis of observational studies suggested that hormone replacement therapy (HRT) was associated with a 50% reduction in the relative risk of coronary events.8 Conversely, the single HERS randomised controlled trial found no benefit of HRT in the secondary prevention of coronary events.9 The findings of HERS have been supported by angiographic studies,10 and the evidence, taken together, has led the American Heart Association to no longer recommend the use of HRT for secondary prevention of coronary artery disease.11 Finally, it is also important to consider whether results from trials, where patient recruitment often depends on a long list of specific inclusion and exclusion criteria, are applicable to an individual patient in a real-world situation.12 All these factors therefore have to be weighed in making a clinical decision; thus, in my opinion, the answer to the problem highlighted by Helfenstein is not simple,1 but depends crucially on a critical appraisal of the characteristics of the available evidence.