Pragmatic trials are often comparatively weak research tools
Clinical trials testing the value of therapeutic interventions range between the two extremes of maximal internal and maximal external validity. Studies at the former extreme, often called efficacy trials, test a treatment under optimal (that is, near laboratory) conditions, in which features such as the patient sample, the intervention, and the outcome measures are narrowly defined. Their research question is usually whether the treatment is efficacious when used under optimal conditions. Unfortunately this can mean that the findings are not generalisable—that is, the results of such studies cannot necessarily be transferred to the wider population.
At the other end of the spectrum, pragmatic (practical or effectiveness) trials aim to approximate the reality of clinical practice. Pragmatic trials have been defined as “trials for which the hypothesis and study design are formulated based on information needed to make a decision”.1 The test treatment is often compared with clinically relevant interventions or with no treatment rather than with a placebo. Alternatively, the test treatment, given in addition to usual treatment, is compared with usual (often diverse) treatment alone. Other features of pragmatic trials include diverse study populations and a range of outcome measures.1
Advocates of pragmatic trials claim that such studies are preferable because they provide more meaningful information upon which to base decision making in health care. Even though this claim looks reasonable at first glance, we dispute it and believe that, while pragmatic trials may more closely approximate the day-to-day clinical situation in which patients are treated, the results they produce are frequently next to meaningless. Essentially this is because their outcomes, positive or negative, can be interpreted in more than one way. In some medical areas, for example complementary medicine, pragmatic trials tend to be conducted by practitioners or others with a strong interest in promoting their therapy. In such instances, the weak design and scope for “spin” in interpreting results render pragmatic trials highly susceptible to bias. Under such circumstances, pragmatic trials can resemble propaganda tools more than science.
Two schematic examples may illustrate our arguments. Imagine a trial in which patients are randomised either to a group receiving advice to use homoeopathy for a specific complaint, for example headache, or to a control group receiving no such advice. Arguably such a design mimics the “real life” situation. Clinicians are not usually faced with the choice between homoeopathy and placebo but want to know whether recommending homoeopathy to their patients yields better outcomes than not recommending it. On the surface, such a trial would therefore seem reasonable.
On closer scrutiny, you realise that such a study design is strongly biased towards a positive outcome—that is, a false positive result in favour of homoeopathy. If this trial is conducted in a general climate where the media inform us on a daily basis about the benefits of complementary medicine, where homoeopathy has a touch of the exotic, where homoeopathy is promoted as an entirely safe therapy, and where VIPs and royalty are regularly reported to depend on the virtues of their homoeopath, patients’ expectations can be predicted to be high. As there is no attempt to control for expectations, for example through blinding and use of a placebo, patients’ expectations, the Hawthorne effect, and a placebo response will all work in concert to produce a “positive” result.
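The mechanism is easy to demonstrate with a small simulation (a sketch of our own; every sample size and effect size below is an assumption chosen purely for illustration). A treatment modelled as having no specific effect at all still produces a statistically “positive” unblinded trial once an expectation and placebo response is allowed to act on the advised arm alone:

```python
# Illustrative sketch only: a simulated, unblinded "advice to use homoeopathy"
# trial in which the treatment is assumed to have NO specific effect.
# All numbers (sample size, effect sizes) are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # patients per arm (assumed)

# Change in headache severity; the treatment itself is modelled as inert,
# so without expectation effects both arms would be identical.
control = rng.normal(loc=0.0, scale=1.0, size=n)

# Advised arm: the same zero specific effect, plus a combined expectation /
# placebo / Hawthorne response (assumed at 0.4 SD) that blinding and a
# placebo control would otherwise cancel out.
expectation_response = 0.4
advised = rng.normal(loc=expectation_response, scale=1.0, size=n)

t, p = stats.ttest_ind(advised, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # typically p < 0.05: a "positive" trial
# Set expectation_response to 0.0 and the same design finds nothing.
```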
Imagine the setting changing, for example because homoeopathy is no longer “flavour of the month”. Re-running the same trial might then generate a completely different result—that is, it might show no difference in outcome between recommending and not recommending homoeopathy. In other words, the main advantage of a pragmatic trial, its generalisability, can be entirely lost if the “environment” of the trial changes. To put it bluntly, the information gained from such an exercise could approach zero.
The second scenario is a study of the usefulness of offering all patients of a given GP practice, irrespective of their condition, a range of complementary therapies. In such a study, patients will be able to opt for a complementary therapy of their choice: say, massage, reflexology, homoeopathy, or chiropractic. Possible outcome measures are quality of life or consumption of drugs, both of which apply to all medical conditions. The control group will be patients who do not take up the offer of complementary therapy. Such studies mimic what is already happening in “real life” and therefore evaluate current practice.1
Investigations along these lines are even more likely to produce a positive but meaningless result. In addition to the confounding factor of patient expectation, we are here confronted with a powerful selection bias: patients are asked to choose their treatment. Any positive outcome could easily be attributable to that bias alone, particularly in the case of the highly subjective quality of life measure. To put it bluntly again, such trials resemble the situation where a market researcher seeks to establish public opinion about fast food by interviewing customers outside a fast food outlet! The sample is unlikely to be representative and the answer is certain to be biased.
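Again, a brief simulation makes the point (our own sketch; every parameter value is assumed for illustration). If the patients who opt in are simply more optimistic, and optimism alone inflates a subjective quality of life score, the uptake group outperforms the controls even when every therapy on offer is modelled as completely inert:

```python
# Illustrative sketch only: selection bias when patients choose their own
# treatment. All parameter values are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1000  # patients in the practice (assumed)

# A latent "optimism" trait: more optimistic patients are more likely to
# take up the offer of complementary therapy.
optimism = rng.normal(size=n)
opts_in = rng.random(n) < 1 / (1 + np.exp(-optimism))

# Quality of life depends on optimism only; the therapies are inert here.
qol = 50 + 5 * optimism + rng.normal(scale=10, size=n)

t, p = stats.ttest_ind(qol[opts_in], qol[~opts_in])
print(f"uptake mean = {qol[opts_in].mean():.1f}, "
      f"control mean = {qol[~opts_in].mean():.1f}, p = {p:.4f}")
# The uptake group scores higher although the therapies do nothing:
# the comparison measures who chose, not what the choice achieved.
```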
The whole point of doing a clinical trial is to answer a research question about causal inference,2 for example: does my treatment cause an effect? If causality is not the issue, other research tools are probably better suited to the question at hand. Pragmatic trials, however, tend to neglect causality: their results can be interpreted in more than one way, and the causal link between treatment and any observed clinical outcome becomes weaker and weaker as more and more characteristics of an efficacy trial are abandoned. When designed by proponents of a given therapy, pragmatic trials are often constructed such that they inevitably produce a positive result. In that sense, they represent a waste of money and effort, and misuse “science” to prove rather than test a hypothesis.
In conclusion, pragmatic trials are often comparatively weak research tools. They should not be used as an alternative to efficacy trials. When designed carefully they can complement such studies in testing the usefulness of a treatment in “real life” once its efficacy has been established.