Is it time for radiology to rethink its approach to AI?

Radiology's relationship with AI must be a balanced and considered partnership that enhances human abilities to provide better healthcare, according to a wide-ranging and visionary commentary posted on 2 August in European Radiology.

Dr. Susan Shelmerdine, PhD.

"We should navigate this relationship with patience and understanding, ensuring AI serves humanity productively, compassionately, and sustainably. This approach will help us navigate this new digital world for better or worse, richer or poorer, and in sickness and health," noted Dr. Susan Shelmerdine, PhD, consultant academic pediatric radiologist at Great Ormond Street Hospital and associate professor at the Great Ormond Street UCL Institute of Child Health, U.K.

Radiologists should recognize when an AI tool no longer serves its purpose or when another tool can address that purpose more effectively, she added. "Continuous governance and regular evaluation of AI performance, assessing its relevance to clinical needs, are crucial in determining when to upgrade or replace AI systems. Proactively making these decisions prevents inefficiencies and wasted resources, ensuring that the healthcare system remains effective and avoids perpetuating initiatives 'for old times' sake.'"

Engaging with patients to understand their needs is also critical for building confidence in AI and the wider healthcare system, according to Shelmerdine. Education and training on the limitations of these tools are essential for building trust. Equally important are managing expectations and supporting transitions, whether moving from no AI to AI, removing AI, or switching between different AI tools.

"The decision to adopt AI should fundamentally be driven by a clinical need to solve critical problems for the institution or society," she continued. "In more affluent nations, supported by comprehensive insurance schemes, the decision may sometimes be driven by a desire to appear innovative, offering a 'luxury' product to gain a competitive edge, regardless of the actual healthcare need."

This approach can introduce new errors in interpretation and potentially exacerbate population health inequities. If AI-driven healthcare improvements rely solely on feedback and data from resource-rich countries, future applications will cater to a small subset of the global population, widening the gap in healthcare access and quality, she argued.

Adopting AI technologies will entail costs, even if offered on a "free trial" basis, Shelmerdine explained. "These expenses include setup costs, staff training, software maintenance, equipment upgrades, and changes to existing care pathways. While the return on investment could be promising, the initial investment can be a significant barrier. Options to mitigate this might include a pay-per-study model instead of a subscription, delaying integration until costs decrease or funds are secured, or developing a robust business case for executive approval."

Robust AI governance measures, arbitration processes, and multidisciplinary team discussions to raise collective awareness of issues can all play a vital part, she wrote. In addition, adopting a medical algorithmic audit framework can promote safety and establish feedback mechanisms between AI vendors and users.

The full commentary is available in European Radiology.
