
Does explaining AI decision-making mitigate bias and improve accuracy?

by John R. Fischer, Senior Reporter | January 02, 2024
Explanations of a biased AI algorithm's reasoning may not keep the model from dragging down clinicians' accuracy in treating patients.
According to a new study, even when clinicians are given an explanation of why an AI model makes certain decisions, they are roughly as likely to miss that the model is biased, and their diagnostic accuracy may be compromised as a result.

Researchers at the University of Michigan found that biased AI models reduced accuracy by 11.3% and that explanations of these models’ reasoning did not help providers recover the lost performance. This is consistent with previous findings suggesting that such models may mislead users.

“The problem is that the clinician has to understand what the explanation is communicating and the explanation itself,” said first author Sarah Jabbour, a Ph.D. candidate in computer science and engineering in the College of Engineering at the University of Michigan, in a statement.

Jabbour and her colleagues evaluated the diagnostic accuracy of 457 hospitalist physicians, nurse practitioners, and physician assistants. Half were provided with an explanation of the AI model’s reasoning, and the other half were not. All were instructed to make treatment recommendations based on AI diagnoses in real clinical cases of respiratory failure, with the AI model indicating whether the patient had pneumonia, heart failure, or chronic obstructive pulmonary disease.

Using the model, the accuracy of clinicians who did not receive an explanation rose by 2.9%. Those who did were shown a heatmap, a visual representation of where in the chest X-ray the AI model was looking as the basis for its diagnosis; their accuracy rose by 4.4%.
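The study report cited here does not say how the heatmaps were produced. As a rough, hypothetical illustration of the general idea only, the sketch below uses occlusion sensitivity, blanking out patches of an image and measuring how much a model's predicted probability drops, to mark where a model is "looking." The model_prob callable and the toy classifier are stand-ins, not the study's actual model.

```python
# Generic occlusion-sensitivity heatmap: a common way to visualize "where a
# model is looking"; the study's actual explanation method is not specified.
import numpy as np

def occlusion_heatmap(model_prob, image, patch=16, baseline=0.0):
    """model_prob: callable mapping an HxW image to P(diagnosis).
    Returns a coarse map of how much occluding each patch lowers that probability."""
    h, w = image.shape
    base = model_prob(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline   # blank out one patch
            heat[i // patch, j // patch] = base - model_prob(occluded)
    return heat  # large values mark regions the model relies on

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xray = rng.random((64, 64))
    xray[16:32, 16:32] += 2.0                               # simulated opacity
    toy_model = lambda img: img[16:32, 16:32].mean() / 3.0  # placeholder, not a real classifier
    print(np.round(occlusion_heatmap(toy_model, xray), 2))
```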

The team also presented clinicians with models that had been intentionally trained to be biased and found that even when the clinicians were shown how those models reached their decisions, such as basing diagnoses on irrelevant data, their accuracy did not recover from the decline.

According to the authors, AI models can pick up shortcuts and spurious correlations from their training data that create these biases. For example, a data set in which women are underdiagnosed with heart failure may lead an AI model to associate being biologically female with a lower risk of heart failure. As a result, relying on these models can amplify existing biases, making it necessary to create effective explanations that help clinicians become aware of, understand, and correct inaccurate model reasoning to mitigate risks.
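To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the study) of how a labeling bias can become a shortcut: if women with heart failure are under-labeled in the training data, a simple classifier learns a negative weight on the "female" feature and underestimates risk for women even when the underlying clinical signal is present. The feature names and numbers are invented for illustration.

```python
# Toy illustration (not from the study): a label bias in training data
# teaches a model a spurious "female -> lower heart-failure risk" shortcut.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

def make_data(underdiagnose_women: bool):
    sex = rng.integers(0, 2, n)                # 0 = male, 1 = female
    signal = rng.normal(0, 1, n)               # stand-in clinical measurement
    true_hf = (signal + rng.normal(0, 0.5, n) > 0.5).astype(int)
    label = true_hf.copy()
    if underdiagnose_women:
        # Women with heart failure are recorded as negative 40% of the time,
        # baking a sex-label correlation into the training labels.
        missed = (sex == 1) & (true_hf == 1) & (rng.random(n) < 0.4)
        label[missed] = 0
    X = np.column_stack([signal, sex])
    return X, label, true_hf

X_train, y_train, _ = make_data(underdiagnose_women=True)
X_test, _, y_test_true = make_data(underdiagnose_women=False)

model = LogisticRegression().fit(X_train, y_train)
print("coefficient on sex:", model.coef_[0][1])    # negative: "female" lowers predicted risk
women = X_test[:, 1] == 1
print("accuracy vs. true status - women:", round(model.score(X_test[women], y_test_true[women]), 2))
print("accuracy vs. true status - men:  ", round(model.score(X_test[~women], y_test_true[~women]), 2))
```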

“There’s still a lot to be done to develop better explanation tools so that we can better communicate to clinicians why a model is making specific decisions in a way that they can understand. It’s going to take a lot of discussion with experts across disciplines,” said Jabbour.

She added that the team hopes the findings will encourage further research into the safe implementation of AI in healthcare for all patient populations, as well as medical education on bias within AI.

The findings were published in JAMA.
