Clinical data registries are a big thing right now. Go to a conference and you'll see multiple presentations describing a new device or technique as "safe and effective" on the basis of "analysis of prospectively collected data." But are they merely the emperor's new clothes?
When properly designed and funded, and with a clear purpose and goal, registries are powerful tools for generating information about the interventions and examinations we perform. Yet I feel very uneasy about many registries because they often have an unclear purpose, are poorly designed, and are inadequately funded. At best, they create data without information. At worst, they cause harm by obscuring reality or suppressing more appropriate forms of assessment.
A clear understanding of the purpose of a registry is crucial to its design. Registries work best as tools to assess safety. However, in a crowded and expensive healthcare economy, this is an insufficient metric by which to judge a new procedure or device. Evidence of effectiveness relative to alternatives is crucial. If the purpose of a registry is to make some assessment of effectiveness, its design needs to reflect this.
Main shortcomings of registries
The gold standard tool for assessing effectiveness is the randomized controlled trial (RCT). These are expensive, time-consuming, and complex to set up and coordinate.
As an alternative, a registry recruiting on the basis of a specific diagnosis (equivalent to RCT inclusion criteria) is ethically simpler and frequently cheaper to instigate. While still subject to selection bias, a registry recruiting on this basis can provide data on the relative effectiveness of the various interventions (or no intervention) offered to patients with that diagnosis. The registry data supports shared decision-making by providing at least some data about all the options available.
Unfortunately, most current interventional registries use the undertaking of the intervention (rather than the patient's diagnosis) as the criterion for entry. The lack of data collection about patients who are in some way unsuitable for the intervention or opt for an alternative -- such as conservative management -- introduces insurmountable inclusion bias and prevents the reporting of effectiveness and cost-effectiveness compared with alternatives.
The alternatives are simply ignored or assumed to be inferior. Safety is blithely equated with effectiveness without justification or explanation. Such registries are philosophically anchored to the interests of the clinician (interested in the intervention) rather than to those of the patient (with an interest in their disease). They are useless for shared decision-making.
This philosophical anchoring is also evident in choices about registry outcome measures, which are frequently those easiest to collect rather than those that matter most to patients: a perfect example of the McNamara (quantitative) fallacy. How often are patients involved in registry design at the outset? How often are outcome metrics relevant to them included, instead of surrogate endpoints of importance to clinicians and device manufacturers?
Even registries in which the ambition is limited to the assessment of safety or postintervention outcome prediction (and in which appropriate endpoints are chosen) are frequently limited by methodological flaws. Lack of adequate statistical planning at the outset, and collection of multiple baseline variables without consideration of the number of outcome events needed to allow modeling, risk overfitting and shrinkage -- fundamental statistical errors.
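The point about outcome events and overfitting can be made concrete with a rough back-of-envelope check. The sketch below uses the common ~10 events-per-candidate-variable heuristic for regression modeling; the heuristic and the example numbers are illustrative assumptions, not figures from this article.

```python
# Rule-of-thumb check of how many baseline variables a registry
# outcome model can support before overfitting and shrinkage
# become serious concerns.

def max_predictors(n_patients: int, event_rate: float,
                   events_per_variable: int = 10) -> int:
    """Approximate number of candidate predictors a model can support."""
    events = n_patients * event_rate
    # The rarer class (events or non-events) is what limits the model.
    limiting = min(events, n_patients - events)
    return int(limiting // events_per_variable)

# A hypothetical registry of 2,000 patients with a 5% complication
# rate yields only 100 events -- enough for about 10 candidate
# predictors, far fewer than many registries collect at baseline.
print(max_predictors(2000, 0.05))  # -> 10
```

A registry that records 50 baseline variables but accrues only 100 outcome events cannot model them all credibly, which is exactly the planning failure described above.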
Systematic inclusion of "all comers" is rare, but failure to include all patients undergoing a procedure introduces ascertainment bias. Global registries often recruit apparently impressive numbers of patients, but scratch the surface and you find rates of recruitment that suggest a majority of patients were excluded. Why? Why include one intervention or patient but not another? Such recruitment problems also affect RCTs, resulting in criticisms about "generalizability" or real-world relevance, but it's uncommon to see such criticism leveled at registry data, especially when it supports preexisting beliefs or procedural enthusiasm, or endorses a product marketing agenda.
The issue of funding
Another important consideration is funding.
Whether the burden of funding and transacting postmarketing surveillance should fall primarily on professional bodies, the government, or the medical device companies that profit from the sale of their products is a subject for legitimate debate. In the meantime, registry funding rarely includes provision for the systematic longitudinal collation of long-term outcome data from all registrants.
Pressured clinicians and nursing staff cannot prioritize data collection because of time and funding limitations. Rather, the assumption is that the absence of notification of an adverse outcome automatically represents a positive one. Registry long-term outcome data is therefore frequently inadequate.
While potential solutions such as linkages to routinely collected datasets and other "big data" initiatives are attractive, these data are often generic and rarely patient focused. The information governance and privacy obstacles to linkage of this sensitive information are substantial.
The U.K. situation
In the U.K., national organizations like the Healthcare Quality Improvement Partnership (HQIP) and the National Institute for Health and Care Excellence (NICE), as well as professional societies such as the British Society for Interventional Radiology, and the medical device industry, promote registries, often enthusiastically.
The IDEAL collaboration is an organization dedicated to quality improvement in research into surgery, interventional procedures and devices. It has recently updated its comprehensive framework for the evaluation of surgical and device-based therapeutic interventions. The value of comprehensive data collection within registries is emphasized in this framework at all stages of development, from translational research to postmarketing surveillance.
First Do No Harm, Baroness Cumberlege's report into failures in the long-term monitoring of new devices, techniques, and drugs, identified a lack of vigilant long-term monitoring as contributing to a system that is not safe enough for those being treated using these devices and techniques. She recommended that a central database be created for implanted devices for research and audit into their long-term outcomes.
Innovative modern trial methodologies such as cluster, preference, stepped wedge, trial within cohort, and adaptive trials provide affordable, robust, pragmatic, and scalable alternative options for the evaluation of novel interventions and are deliverable within a National Health Service environment. However, registries are still likely to have an important role to play.
HQIP's Proposal for a Medical Device Registry includes key principles for registry development including patient and clinician inclusivity, governance, and ease of routine data collection using electronic and digital systems.
The way forward
Where registries are conceived and designed around a predefined, specific hypothesis or purpose; where they are based on appropriate statistical methodology with relevant outcome measures; where they are coordinated by staff with the necessary skillsets to manage site, funding, and regulatory aspects; and where they are budgeted to ensure successful delivery and data collection -- then they can be powerful sources of information about practice and novel technologies.
This is a high bar, but is achievable: the use of registry data during the COVID-19 pandemic has highlighted this. Much effort is being expended on key registries, such as the U.K. National Vascular Registry, to try to improve the quality and comprehensiveness of the data collected and create links to other datasets. But where these ambitions are not achieved, we must remain highly skeptical about any evidence registry data purports to present. Fundamentally, unclear registry purpose, poor design, and inadequate funding will guarantee both garbage in and garbage out.
Where does this analysis leave us?
Registry data is everywhere. Like the emperor's new clothes, is it something you accept at face value, uncritically, because everyone else does? Are you willingly blind to the implications of registry design if the data interpretation matches your prejudice?
Instead, perhaps next time you read a paper reporting registry data or are at a conference listening to a presentation about a "single-arm trial," be like the child in the story and puncture the fallacy. Ask whether there is any meaningful information left once the biases inherent in the design are stripped away.
Dr. Chris Hammond is a consultant vascular radiologist and clinical lead for interventional radiology at Leeds Teaching Hospitals NHS Trust, Leeds, U.K.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.