When did you last make a mistake? Maybe you had an accident in the car, left a tap running and flooded the house, or made a bad investment. How did that feel?
Life is full of risks. We try to engineer them or their effects out as much as possible: We wear seatbelts, lay our infants on their backs at bedtime, tolerate airport security, and buy insurance. When something bad happens, even when it is potentially avoidable, we know that doesn't mean the person making the mistake was necessarily irresponsible or reckless.
What about at work? When did you last make a mistake at work? Have you missed a cancer on a chest radiograph, caused bleeding with a biopsy needle, or forgotten to add an alert to a time-sensitive finding? Were you subject to an investigation or regulatory process? How did that feel? Did it feel different?
Medicine is a risky business. Sometimes error is avoidable, but some error is intrinsic to the operational practicalities of delivering modern healthcare.
Missing a small abnormality on a few slices of a CT scan containing thousands of images is a mode of error that persists despite most radiologists being painfully aware of it. Mitigations to reduce the rate of occurrence (e.g., comfortable reporting workstations, absence of interruption, reduced workload and pressure to report, double reporting, and perhaps artificial intelligence assistance) are neither infallible nor always operationally realistic. Double reporting halves capacity. While we design processes to reduce risk, it's impossible to engineer error out completely, and other models are needed.
To make error productive, we learn from it where we can, but we must recognize that sometimes there is nothing to learn, or that the lessons are so familiar and so often repeated that an independent observer might be surprised the error persists; "never events" still happen.
Fear of error
If risk and error are intrinsic to what we do in healthcare, why then do we seem to fear error so much?
The language we use about medical error is replete with emotionally laden and sometimes pejorative terms: negligence, breach of duty, substandard, avoidable, gross failure. Is it any wonder then that the meaning healthcare professionals sometimes ascribe to adverse event investigation outcomes is threat, personal censure, and condemnation?
The language frames the nature of the response: If negligence or substandard care has resulted in avoidable harm, there is an associated implication that the providers of that care were negligent or willfully blind to it. Most healthcare professionals I know perceive themselves as striving to do their best for their patients, so this implication clashes with self-image, motivation, and belief.
Fear of error is compounded by the manner in which error has historically been investigated and by how the courts manage claims.
Retrospective case review occurs when it appears something has gone wrong in a patient's care and sometimes determines that an error was "avoidable." Such review is inevitably biased by hindsight and frequently by a narrow focus on the individual error and its harm, without contextualizing this within the wider workload or operational pressures prevailing at the time the error was made. Not noticing a small pneumothorax after a lung biopsy might be due to carelessness, or it might be because the operator was called away suddenly to manage a massive hemoptysis in a previous patient in the recovery area.
It's easy to be wise after the event and to suggest that a different course of action should have been taken, but, again, this jars with our lived experience of making high-stakes decisions in pressured situations, frequently with incomplete information. More enlightened modern investigative processes understand this and are thankfully becoming increasingly commonplace in healthcare.
Personal failure?
Too often we continue to perceive error as a personal failure, a marker of poor performance or incompetence, a point at which we could or should have done better.
The individual who is identified at the point when a latent failure becomes real is often well placed to describe upstream failures and process violations that led to the error, and the culture that allowed these violations to become normalized. In addition to the personal cost, focusing on personal failure means this individual is marginalized, their view dismissed, and their intelligence lost.
Thinking of this individual as a "second victim" rather than as a perpetrator is helpful: Patient and professional are both casualties. Such a view is by definition nonaccusatory and is a neutral starting point for an inquisitorial assessment of why an error occurred.
Recognition that some error is unavoidable still allows for patients to be compensated when things go wrong. An organization or individual may be liable for providing compensation even if they are not deemed responsible for the harm. The idea of liability as distinct from blame is familiar to us: It's why we buy third-party insurance for our cars. Some collisions are clearly due to negligent driving. Many are not, but we are nevertheless liable for the consequences.
In the U.K., healthcare organizations are liable for the care they provide and are insured for claims for harm. For a patient to access compensation, legal action (or the threat of it) is required, which inevitably results in an assessment of blame, conflates liability with culpability, and does nothing to promote a no-fault culture. The insurance is named the "Clinical Negligence Scheme for Trusts," explicitly reinforcing the unhelpful notion that compensable error is de facto negligence.
Even ultrasafe industries like aviation have "optimizing violations" (pilots refer to this as "flying in the grey"): there's always a reason not to go flying. In healthcare, we don't get this choice: Error is an inevitable consequence of the societal necessity of providing complicated healthcare to ill, frail people. The only way to avoid it is not to provide the care.

We can only learn in an environment that is supportive when error occurs, understands that error is not a reflection of professional competence, and embraces it as a potential opportunity to get better rather than a reason to punish. Without this, our practice will become beleaguered and bunkered, shaped by the fear of censure rather than by what is technically, practically, and ethically the right thing to do.
Our regulators, legal system, and investigatory processes have been slow to embrace the idea that some error is inevitable. They have much to learn from industries such as aviation. In the meantime, it remains hard to be content with the notion that an error in your practice is frequently merely a reflection that you work in a risky business.
Dr. Chris Hammond is a consultant vascular radiologist and clinical lead for interventional radiology at Leeds Teaching Hospitals NHS Trust, Leeds, U.K.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.