"There is no evidence that jumping out of a plane with a parachute improves outcome." "If you go to PubMed, you will find no publications that breathing air is good for you." "There'll never be a trial, we are beyond that."
Have you ever heard these three statements made when someone discusses the evidence for a particular new (or old) therapy? The statements might be true, but are they useful? Do they advance an argument? What do they mean?
A paper in the Christmas 2018 edition of the British Medical Journal (BMJ) found no evidence of benefit from parachute use when people jumping out of an aeroplane (which happened to be stationary at ground level) were randomized to wear either a parachute or an empty North Face rucksack.
This built on a 2003 BMJ systematic review that found no randomized evidence on the usefulness of parachutes for high-altitude exits. Both articles have a somewhat tongue-in-cheek style, but they make the point that "... under exceptional circumstances, common sense must be applied when considering the potential risks and benefits of interventions."
It is self-evident that wearing a parachute when jumping out of a plane in flight, or being in an atmosphere with enough air to breathe, is good for you. When people quote arguments about parachutes or air (or similar) in response to a query about a lack of evidence for a particular intervention, they are implying that the intervention under discussion is similarly self-evidently safe, effective, or cost-effective, and that common sense must be applied.
Evidence base for an intervention
The issue is that the benefits of most medical interventions clearly do not fall into this self-evident category.
In my own field, it is not self-evident that dosimetric methods will improve selective internal radiation therapy enough to change trial outcomes; that endovascular intervention for acute deep vein thrombosis improves long-term outcomes compared with anticoagulation; or that, for complex aneurysms, endovascular aneurysm repair is better than open surgery or conservative management. I could go on.
And here we come to the crux of the matter: such comments add nothing to a discussion of an intervention's evidence base. Rather, their effect is to stifle debate into a confused silence.
Whether this is done intentionally or out of embarrassment is irrelevant. The effect is the same: Intellectual curiosity is suppressed and questioning is discouraged. This is the opposite of the empiricism that underpins the whole of Western scientific thought. Before people asked questions, it was self-evident that the Earth was flat, that it was the center of the universe, and that it was orbited by the sun. That was just common sense.
A strategy related to appeals to common sense is the "weaponization" of the weight of collective opinion.
Clinical trial design depends on equipoise, meaning clinicians do not know which of several options is better. Equipoise is dependent on opinion, and opinion is swayed by much more than evidence. Medical professionals are just as receptive to marketing, advertising, fashion, and halo bias as anyone else. Nihilistic statements denying that a trial is possible (or even desirable) on the grounds that an intervention has become too popular or culturally embedded are only true if we allow them to be.
The role of senior "key opinion leaders" is critical here: They have a responsibility to openly question the status quo, to use their experience to identify and highlight the holes in the evidence, and to point out the "elephant in the room." But too often these leaders (supported in some cases by professional bodies and societies) become a mouthpiece for industry and vested interests, promoting dubious evidence, suppressing debate, and inhibiting intellectual curiosity.
There are notable examples of trials overcoming the hurdle of entrenched clinical practice and assessing deeply embedded cultural norms; see, for example, the Lancet article from 2017. This requires committed leaders who create a culture where doubt, equipoise, and inquiry can flourish.
The need for "realistic uncertainty"
Given the rapid pace of technological development in modern healthcare, it is not unreasonable to have an opinion about an intervention that is not backed by the evidence of multiple congruent randomized controlled trials. But this opinion must be bounded by a realistic uncertainty.
A better word for this state of mind is a "reckoning." To reckon allows for doubt. By contrast, when an opinion hardens into a belief, doubt is squeezed out. "Can this be true?" becomes "I want this to be true," then, "It is true," and ultimately, "It is self-evidently true." Belief becomes orthodoxy, questioning becomes heresy, and heresy is actively (or passive-aggressively) suppressed.
Karl Popper's principle of empirical falsification holds that a theory is only scientific if it is falsifiable.
In his book on assessing often-incomplete espionage intelligence, David Omand, former head of GCHQ (the U.K.'s signals intelligence, security, and cyber agency), comments that the best theory is the one with the least evidence against it.
A powerful question, therefore, is not: "What evidence do I need to demonstrate that this view of the world is right?" but the opposite: "What evidence would I need to demonstrate that this view of the world is wrong?"
In the run-up to the second Gulf War in 2002-2003, an important question was whether Iraq had an ongoing chemical weapons program. As we all know, no evidence of one was found, before or after the invasion. The theory with the least evidence against it is that Iraq had, indeed, destroyed its chemical weapons stockpile. More prosaically, that all swans are white is self-evident until you observe a single black one.
If someone is so sure that an intervention is self-evidently effective, proposing an experimental design to test it should be welcomed, not seen as a threat. But belief (as opposed to a reckoning) is bound up with identity, self-worth, and professional pride. What, then, does an impassioned advocate of a particular technique have to gain from giving an honest answer to the question, "What evidence would it take for you to abandon this intervention as ineffective?" if that evidence is then produced?
Research challenges
Research is hard. Even before the tricky task of patient recruitment begins, a team with complementary skills in trial design, statistics, decision-making, patient involvement, data science, and more must be assembled. Funding and time must be identified. Colleagues must be persuaded that the research question is important enough to be prioritized among their other commitments.
This process is time-consuming and expensive, and it often ends in failure. But that is no reason not to try. We are lucky in medicine that many of the research questions we face would be solvable with the tools at our disposal, if only we could deploy them rapidly and at scale.
Unlike climate scientists, we can design experiments to test our hypotheses. We do not have to rely on observational data alone.
As I visit device manufacturers at medical conferences, I wonder: What if more of the resources used to fund the glossy stands, baristas, and masseuses were channeled into rigorous, independent research? Generating the evidence to support what we do would then be so much easier.
And I also wonder why we tolerate a professional culture that so embraces orthodoxy, that finds excuses not to rigorously assess the new (and less new) interventions we perform, and that is happy to let glib statements about parachutes and air pass unchallenged as answers to questions of trial desirability, feasibility, and generalizability.
Dr. Chris Hammond is a consultant vascular radiologist and clinical director for radiology at Leeds Teaching Hospitals NHS Trust, Leeds, U.K.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.