CHICAGO - Computer-aided detection (CAD) software may not produce a significant improvement in detecting suspicious lesions on chest radiographs, but it's not necessarily the technology's fault, according to research presented Wednesday at the 2009 RSNA meeting in Chicago.
A Dutch study team found that while CAD has the potential to improve reader performance, its utility was limited because readers were unable to differentiate true-positive from false-positive annotations.
"Most of the true-positive CAD annotations were rejected by the observers, and only a few were accepted by the observers," said presenter Dr. Bartjan de Hoop of University Medical Center in Utrecht, Netherlands. "Also, many of the false-positive CAD annotations are accepted by the observers."
Many lung cancers that are missed on chest radiography can be seen in retrospect, with malignancies overlooked or misinterpreted as normal structures. In response, CAD technology is being developed for use as a second reader in hopes of improving nodule detection.
Seeking to determine the effect of CAD on reader performance, researchers studied 51 patients with CT-detected and histology-proven lung cancers and 65 patients without nodules on CT to serve as controls. All were current or former heavy smokers, according to the researchers.
Four radiology residents and two experienced radiologists first read the images unaided and then with OnGuard 5.0 CAD software (Riverain Medical, Miamisburg, OH). The readers were asked to identify and localize potential cancers, and to score their confidence on a scale of 1 (no lesion) to 5 (definite lesion).
Only correctly localized marks were considered to be true positives; marks not matched to a lesion were considered false-positive findings. The median diameter of tumors in the study was 11.8 mm, with a range of 6.8 to 50.7 mm. Eleven (21.6%) of the 51 lesions were detected by none of the observers, while 10 (19.6%) were found by all readers.
CAD had a standalone sensitivity of 57%, with an average of 2.4 false positives per chest x-ray. The two radiologists had an average sensitivity of 60% without CAD and 59% with CAD; average specificity was 70% without CAD and 74% with CAD. The residents had a mean sensitivity of 47% without CAD and 49% with CAD; average specificity was 50% without CAD and 55% with CAD. The changes were not statistically significant.
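For readers unfamiliar with these metrics, a minimal sketch of how per-reader sensitivity and specificity are derived from the study's case counts (51 cancer cases, 65 controls) may help. The reader counts below are hypothetical, chosen only to land near the reported values; they are not the study's raw data.

```python
# Illustrative only: how the reported per-reader metrics are derived.
# The individual reader counts are hypothetical, not the study's raw data.

def sensitivity(true_positives: int, total_lesion_cases: int) -> float:
    """Fraction of actual cancer cases the reader correctly localized."""
    return true_positives / total_lesion_cases

def specificity(true_negatives: int, total_control_cases: int) -> float:
    """Fraction of lesion-free controls the reader correctly left unmarked."""
    return true_negatives / total_control_cases

# Hypothetical reader: 29 of 51 cancers found, 46 of 65 controls left clear.
print(round(sensitivity(29, 51), 2))  # 0.57
print(round(specificity(46, 65), 2))  # 0.71
```

Note that because only correctly localized marks counted as true positives, a reader who marked the wrong spot on a cancer case gained no sensitivity credit.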
An average of 4.5 tumors missed by the radiologists were correctly marked by CAD; the residents missed an average of eight tumors that CAD correctly marked. Of these, the radiologists accepted an average of 0.5 true-positive CAD marks and rejected an average of four, while the residents accepted 2.8 true-positive CAD marks and rejected 5.2.
The radiologists accepted an average of 5.5 false-positive CAD annotations, while the residents accepted 11.
Because readers could not reliably distinguish between true- and false-positive CAD annotations, CAD failed to achieve a statistically significant impact on either sensitivity or specificity, according to de Hoop.
"The surplus of false-positive annotations limits the use of this particular CAD system, at least for subtle nodules," he said.
The average areas under the localization receiver operating characteristic (L-ROC) curve were 0.56 for radiologists without CAD and 0.55 with CAD. For residents, the areas under the L-ROC curve were 0.42 without CAD and 0.45 with CAD.
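The area under an ROC-type curve can be read as the probability that a randomly chosen cancer case receives a higher confidence score than a randomly chosen control. A small sketch of that calculation, using the study's 1-to-5 confidence scale with invented scores (not the study's data), is shown below; a true L-ROC analysis additionally credits a cancer case only when the lesion is correctly localized, which this plain ROC version omits and which is why L-ROC areas can fall below 0.5.

```python
# Sketch: AUC from 1-5 confidence scores. Scores are invented for
# illustration; an L-ROC analysis would also require correct localization.

def auc(cancer_scores, control_scores):
    """Probability a random cancer case outscores a random control (ties count 0.5)."""
    wins = sum((c > n) + 0.5 * (c == n)
               for c in cancer_scores for n in control_scores)
    return wins / (len(cancer_scores) * len(control_scores))

cancer_scores = [5, 4, 4, 3, 2]   # hypothetical reader confidence, cancer cases
control_scores = [1, 2, 1, 3, 1]  # hypothetical confidence, control cases

print(auc(cancer_scores, control_scores))  # 0.92
```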
By Erik L. Ridley
AuntMinnie.com staff writer
December 2, 2009
Copyright © 2009 AuntMinnie.com