
New research highlights the risk of automation bias in radiology

by John R. Fischer, Senior Reporter | May 02, 2023
Artificial Intelligence | Women's Health
Even experienced radiologists are more likely to make an inaccurate diagnosis when an AI program labels a mammogram with the wrong BI-RADS classification.
Researchers in Germany and the Netherlands found that even experienced radiologists were more likely to make an inaccurate diagnosis in cases where purported AI systems were incorrect, raising concerns about automation bias.

While several studies have shown that introducing AI into the mammography workflow can improve radiologist performance, this was the first to evaluate how incorrect suggestions from AI-based systems influence the accuracy of radiologists' mammogram readings.

In their prospective study, the researchers had 27 radiologists read 50 mammograms: one set of 10 scans that the purported AI system had interpreted correctly, and another set of 40 that included 12 images assigned an incorrect BI-RADS category.
While the researchers were not surprised to find that accuracy among inexperienced radiologists fell from 80% in cases where the AI suggested the correct BI-RADS category to less than 20% when it suggested the wrong one, they were surprised to see accuracy drop among those with more than 15 years of experience from 82% to 45.5%.

"As healthcare becomes increasingly personalized and complex, there is a possibility that radiologists could become overly reliant on AI for making diagnoses. This reliance may increase the risk of AI bias affecting their performance," lead author Dr. Thomas Dratsch, of the Institute of Diagnostic and Interventional Radiology at University Hospital Cologne, told HCB News.

Because AI is built with human input and is often trained on data from specific, limited patient populations, experts say it may incorporate biases that lead to inaccuracies. In fact, in a recent survey, 60% of Americans said they would not feel comfortable with AI diagnosing diseases or recommending treatments, citing these concerns as well as a lack of familiarity with the technology and how it works.

The researchers say that potential safeguards, such as showing users the confidence levels of the decision support system or teaching them the reasoning behind these systems' decisions, may help reduce automation bias by making radiologists feel more accountable for their own decisions.
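As an illustration only, the confidence-level safeguard described above might look something like the following sketch. This is a hypothetical example, not code from the study: the `AISuggestion` class, the `render_suggestion` function, and the 0.8 threshold are all assumptions made for the sake of demonstration.

```python
# Hypothetical sketch: surfacing an AI system's confidence alongside its
# BI-RADS suggestion, one of the safeguards the researchers propose for
# reducing automation bias. Names and threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AISuggestion:
    birads_category: int   # suggested BI-RADS category (0-6)
    confidence: float      # model confidence in [0, 1]


def render_suggestion(s: AISuggestion, threshold: float = 0.8) -> str:
    """Format the suggestion so low-confidence outputs are flagged,
    prompting the radiologist to review the case independently."""
    label = f"AI suggests BI-RADS {s.birads_category} (confidence {s.confidence:.0%})"
    if s.confidence < threshold:
        label += " -- LOW CONFIDENCE: verify independently"
    return label


print(render_suggestion(AISuggestion(birads_category=4, confidence=0.62)))
```

The point of the flag is behavioral rather than technical: an explicit low-confidence warning gives the reader a concrete reason to second-guess the suggestion instead of deferring to it.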

Additionally, they plan to study tools such as eye-tracking technology, which Dratsch says can help clarify how radiologists using AI integrate its output into their decision-making processes.

"By comparing these eye movements to those of radiologists not using AI, we can better understand the potential pitfalls of automation bias and develop effective methods for presenting AI output that encourages critical engagement while minimizing the risks associated with bias," he said.

The authors are affiliated with the University of Cologne, University Clinic Schleswig-Holstein, University Medical Centre of the Johannes Gutenberg-University Mainz, and University Clinic Würzburg in Germany; and Vrije Universiteit Amsterdam in The Netherlands.

The findings were published in Radiology.

The authors did not respond to requests for comment.
