AI image interpretation platforms come of age


Radiology image interpretation systems involving artificial intelligence (AI) have come of age over the past year or so, but human reporters such as radiologists remain the final arbiters and interpreters of computer-generated reports and continue to hold medicolegal responsibility for interpretation.

Human interpretations take account of a patient's symptoms and signs, previous images, blood tests, histopathology reports, and so on. Radiologists continue to issue the actionable human report, which is personalized to the patient and provides a tentative or differential diagnosis along with advice on the next step of management.


Computer-generated reports can assist patient management and help nonspecialist emergency doctors make decisions on whether the patient has a fracture or a pneumothorax, for example. Emergency doctors often read images in poor lighting conditions, and they are not trained experts in the final interpretation of radiology images. They tend to be extremely busy, too. AI decision-support tools can help them enormously, but there needs to be a declaration accompanying every computer-generated report about limitations in terms of sensitivity and specificity. Doctors on the front line cannot consider such reports to be 100% accurate.

This is no different from the electrocardiogram (ECG) reading disclaimer that appears on my Apple Watch. The final report should be issued by a human specialist who is trained in radiology image interpretation and who understands the limitations of the computer-generated report. Reports issued by the human specialist will be actionable, as per national guidelines for radiology reporting, such as the standards of the U.K. Royal College of Radiologists. They will give advice on the next step of management -- which is often dictated by local circumstances and the availability of services.

Progress on display in Vienna

At ECR 2019, AI image analysis platforms emerged as a common theme amongst PACS vendors. Most companies are incorporating an AI platform into their PACS portfolio. These platforms contain various types of AI algorithms for image analysis, many of which are being developed by small niche vendors and research groups.

In the future, experts envisage that all the images sent from the modalities to PACS will be preanalyzed by AI image platforms. These platforms will identify the modality and body parts within the study, using the DICOM header metadata, and they will apply the appropriate AI algorithm to the study.

For digital radiography, AI algorithms can detect a fracture in the appendicular skeleton, and on chest x-rays, they can detect a pneumothorax, rib fracture, consolidation, tube placement, pleural effusion, and lung nodule. On head CT studies, AI algorithms can detect a skull fracture, brain hemorrhage, brain infarct, brain tumor, and pneumocephalus. On brain MRI, they can detect multiple sclerosis (MS) plaques, stroke, brain bleeds, and tumors. On body CT, they can detect liver lesions, lung nodules, and vertebral body fractures. On mammography, they can assess suspicious lesions or calcifications. If an algorithm detects an abnormality, it will "query and retrieve" a prior similar study for analysis and comparison.
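To illustrate how such header-driven dispatch might work, here is a minimal Python sketch using pydicom. The registry contents, its (Modality, BodyPartExamined) keys, and the select_algorithms function are hypothetical simplifications; a real platform would need far richer mapping logic, because body-part coding varies widely between sites.

```python
import pydicom

# Hypothetical registry mapping (Modality, BodyPartExamined) to the
# AI algorithms that should run on a study of that type.
ALGORITHM_REGISTRY = {
    ("CR", "CHEST"): ["pneumothorax", "rib_fracture", "consolidation",
                      "tube_placement", "pleural_effusion", "lung_nodule"],
    ("CT", "HEAD"): ["skull_fracture", "brain_hemorrhage", "brain_infarct",
                     "brain_tumor", "pneumocephalus"],
    ("MR", "HEAD"): ["ms_plaques", "stroke", "brain_bleed", "brain_tumor"],
    ("MG", "BREAST"): ["suspicious_lesion", "calcification"],
}

def select_algorithms(dicom_path: str) -> list[str]:
    """Read the DICOM header metadata and return the algorithms to apply."""
    # stop_before_pixels avoids loading the image data just to read the header.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    modality = ds.get("Modality", "")
    body_part = str(ds.get("BodyPartExamined", "")).upper()
    return ALGORITHM_REGISTRY.get((modality, body_part), [])
```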

The output from AI platforms will enhance the reporting workflow in two important ways:

  • Provide computer preanalysis of the radiology images to help inform and augment the human-generated radiology report
  • Help reprioritize reporting worklists when an abnormality is detected

Adherence to global interoperability standards and workflow support is key to the success of AI.

Also, users must bear in mind these three standard inputs into the AI platform:

  • All DICOM studies from modalities should always go to the AI platform first, before arriving in PACS for display. This workflow will prevent inadvertent reporting of images by radiologists, before the preanalysis by the relevant AI algorithm has taken place.
  • AI algorithms must be capable of performing a DICOM "query and retrieve" from the local PACS. When an abnormality such as a lung nodule is detected, the AI algorithm should be able to do a "query and retrieve" for prior similar studies from PACS and compare them for rate-of-growth analysis (see the sketch after this list). The same approach applies to liver lesions, MS plaques, etc.
  • An HL7 ORM (order) message from the RIS with "status complete by radiographer" will act as a trigger to start AI platform image analysis. A lag time can be introduced to allow for the study to arrive into the AI platform before image analysis begins. This will ensure that the AI platform begins analysis only when the radiographer has sent the full study to PACS.
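As a sketch of the "query and retrieve" step, the following uses pynetdicom to issue a study-level DICOM C-FIND for a patient's prior CT studies. The PACS hostname, port, and AE title are hypothetical placeholders; a real deployment would follow the query with a C-MOVE or C-GET to retrieve the matching studies for comparison.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

# Application entity for the AI platform (AE title is a placeholder).
ae = AE(ae_title="AI_PLATFORM")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# Study-level query: find prior CT studies for the same patient.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"        # taken from the current study's header
query.ModalitiesInStudy = "CT"
query.StudyInstanceUID = ""      # empty value = return this attribute
query.StudyDate = ""

assoc = ae.associate("pacs.example.org", 104)  # hypothetical PACS host/port
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00/0xFF01 are pending responses, each carrying one match.
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyInstanceUID, identifier.StudyDate)
    assoc.release()
```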

Standard outputs from AI platforms

Standard outputs from AI image analysis platforms must include the following:

  • Graphical representation of the region of interest (the detected abnormalities) or markups/pointers, which should always be displayed using DICOM standards so they can be viewed in PACS viewers. DICOM offers several options for communicating graphical outputs:
      ◦ DICOM SR (structured report) and DICOM Segmentation: preferred by the standards bodies due to their robust interoperability for both graphics and text, but adoption is slow
      ◦ DICOM Presentation State: commonly used in clinical PACS; has a toggle on/off option for the graphics
      ◦ DICOM Overlay: also has a toggle on/off option for graphics
      ◦ DICOM Secondary Capture: burned-in pixels; limited functionality and no toggle on/off feature
  • Abnormality classification in DICOM as text data sent as DICOM SR -- e.g., fracture, hemorrhage, consolidation, infarct, pleural effusion, etc.
  • Abnormal flag notification in HL7. When an abnormality is detected, a notification needs to be sent as an abnormal flag via an HL7 ORM message in OBX 8.
  • Image analysis complete notification in HL7. This informs the RIS that image preanalysis by the AI platform is complete, updating the status within the RIS and moving the exam into the reporting worklist for the human interpreter. The output from the AI platform will carry ORC 5-A, as per HL7 Table 0038, where A stands for "some, but not all, results available."
  • Abnormality classification in HL7. Fracture, hemorrhage, consolidation, infarct, pleural effusion, etc., should be sent in an HL7 ORM message using OBX 5 (see the sketch after this list).
  • Declaration/disclaimer. The AI platform should always send out a declaration that lists the abnormalities evaluated by the algorithm(s) applied to the study (e.g., for head CT: brain hemorrhage, skull fracture, brain infarct, etc.). The sensitivity and specificity of the applied algorithms for each abnormality should be stated. This declaration must be communicated in a PDF-wrapped DICOM SR format, added as an additional series on PACS.
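To make the field positions concrete, here is a minimal Python sketch that assembles such an ORM message as a plain pipe-delimited string. The application and facility names are hypothetical, and the segment grammar is deliberately simplified; a production interface would carry a full MSH/PID/OBR chain agreed with the local RIS.

```python
from datetime import datetime

def build_orm_message(accession: str, abnormality: str | None) -> str:
    """Assemble a minimal HL7 v2 ORM^O01 message announcing AI preanalysis.

    Field positions follow the article: ORC-5 = 'A' (some, but not all,
    results available, per HL7 Table 0038), OBX-5 = abnormality
    classification, OBX-8 = abnormal flag.
    """
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    # MSH-9 = ORM^O01; the message control ID reuses the accession number
    # here purely for brevity.
    msh = (f"MSH|^~\\&|AI_PLATFORM|RADIOLOGY|RIS|RADIOLOGY|{ts}"
           f"||ORM^O01|{accession}|P|2.5")
    # ORC-1 = SC (status changed), ORC-5 = A (order status).
    orc = f"ORC|SC|{accession}|||A"
    if abnormality:
        # OBX-5 carries the classification, OBX-8 = A flags it abnormal.
        obx = f"OBX|1|TX|AI_FINDING||{abnormality}|||A"
    else:
        obx = "OBX|1|TX|AI_FINDING||No abnormality detected|||N"
    return "\r".join([msh, orc, obx])

# Example: a chest x-ray where the platform has flagged a pneumothorax.
print(build_orm_message("ACC0001", "pneumothorax"))
```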

Other practical issues

When implementing an AI platform, users must ensure that their PACS is capable of displaying DICOM graphical information and DICOM SR (segmented region-of-interest areas and text overlay) from AI, which can be toggled on and off via the PACS viewer icon. This requires support for standards such as DICOM SR display and DICOM Presentation State. In addition, the PACS should display the declaration information from the AI platform about the algorithms used on the study, with the sensitivity and specificity for each abnormality detected.

Reporting worklists also need capabilities to support the AI workflow, beginning with the trigger for reporting. Currently, reporting is triggered when the radiographer marks an exam as completed on the RIS. In the future, with AI image preanalysis, the workflow will need to be altered so that the radiographer's status completion triggers AI platform image analysis. Once image preanalysis is complete, the platform will send an outbound HL7 ORM with ORC 5-A (partial results available), which, in turn, triggers the human reporting workflow within the RIS (i.e., moves the exam to the dictation worklist).
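On the RIS side, this trigger reduces to checking the order status field of the inbound message. A minimal sketch, assuming pipe-delimited HL7 v2 with carriage-return segment separators; a real RIS interface engine would use a proper HL7 parser and per-site mappings:

```python
def ai_preanalysis_complete(message: str) -> bool:
    """Return True when an inbound ORM reports AI preanalysis complete.

    Checks ORC-5 for status 'A' (some, but not all, results available);
    on True, the RIS would move the exam onto the dictation worklist.
    """
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "ORC" and len(fields) > 5 and fields[5] == "A":
            return True
    return False
```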

Reporting worklist prioritization by individual reporters currently takes into account many RIS data items that are sorted and filtered by radiologists. These include referral location type (accident and emergency, inpatient, outpatient, and general practitioner), modality type (CT, MRI, digital x-ray, etc.), specialty of the referrer (ear, nose, and throat; pediatrics; gastroenterology; etc.), referrer's urgency (urgent or routine), intended reporter (work allocation by operators), and date and time of exam completion. Radiologists prioritize their work based on the session being worked: during an emergency duty session, for example, the radiologist will filter the worklist to the CT and MRI scans for accident and emergency and inpatient referrals.

AI decision-support tools have the potential to enhance worklist prioritization by adding data fields such as the abnormalities detected by the AI algorithm. By notifying the RIS when an abnormality is seen and what type of abnormality it is, these additional data items can be used for enhanced sorting and filtering of reporting worklists. The abnormality classification needs to be transmitted in standard HL7 v2 messages to help with worklist prioritization on the RIS: AI platforms should send an HL7 ORM with OBX 8 as the abnormal flag notification when an abnormality is detected and OBX 5 for the type of abnormality. These values will be stored in the RIS database and used by radiologists to prioritize reporting worklists.
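As an illustration, here is a minimal Python sketch of how the stored OBX values could feed an emergency-session worklist. The WorklistEntry structure and its field names are hypothetical; a real RIS would apply these rules in its database queries.

```python
from dataclasses import dataclass

@dataclass
class WorklistEntry:
    """Hypothetical worklist row combining RIS fields with AI outputs."""
    accession: str
    referral_location: str   # e.g., "A&E", "INPATIENT", "OUTPATIENT", "GP"
    modality: str            # e.g., "CT", "MR", "DX"
    abnormal_flag: bool      # stored from OBX 8
    abnormality: str         # stored from OBX 5, e.g., "pneumothorax"
    completed_at: str        # exam completion timestamp, ISO 8601

def emergency_worklist(entries: list[WorklistEntry]) -> list[WorklistEntry]:
    """Filter to A&E/inpatient CT and MRI; abnormal-flagged studies first,
    then oldest exams first within each group."""
    relevant = [e for e in entries
                if e.referral_location in {"A&E", "INPATIENT"}
                and e.modality in {"CT", "MR"}]
    return sorted(relevant, key=lambda e: (not e.abnormal_flag, e.completed_at))
```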

Overall, there is huge potential for improving patient management decisions with image preanalysis by AI algorithms. Better detection of abnormalities such as fractures and pneumothorax, and the support this gives front-line emergency doctors, would be invaluable. It should also bring cost savings for public hospitals by enhancing safety in clinical practice.

Currently, a single human reader evaluates radiology exams. Introducing AI as an additional reader for all exams will improve the sensitivity and specificity of human reading. Computer algorithms look for specific abnormalities, whilst human readers provide the medical interpretation of the images and advise on management. Thus, there will be real collaboration between man and machine.

AI platforms can support workflow by the adoption of HL7 and DICOM standards. However, if industry is unwilling to adopt common standards for communication and collaboration for PACS, RIS, and AI platforms, then we are likely to see massive delays in implementation.


Dr. Neelam Dugar is a consultant radiologist at the Doncaster and Bassetlaw Teaching Hospitals National Health Service (NHS) Foundation Trust, U.K., and informatics adviser to the Royal College of Radiologists (RCR). This article was written in her personal capacity, and her views and ideas are not necessarily shared by the RCR.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.
