Jurors may be less likely to find physicians liable for harm from AI than commonly thought

by Robin Lasky, Contributing Reporter | January 22, 2021
One reason physicians might be reluctant to use artificial intelligence is fear of being found liable by a jury if something goes wrong, but new research published in the Journal of Nuclear Medicine suggests those concerns may not be as warranted as commonly believed.

"Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision-making of lay juries," said Kevin Tobia, assistant professor of law at the Georgetown University Law Center in Washington D.C. said in a statement about his team’s results. "Our study is the first to focus on that last aspect, studying potential jurors' attitudes about physicians who use AI.”

In this online experimental study, a representative sample of 2,000 U.S. adults was presented with hypothetical scenarios in which a physician chose to follow or disregard a drug dosage recommendation provided by an AI system. In each scenario, the physician's decision resulted in harm to the patient.

Participants were then asked whether the medical decision in each case was one that could have been made by "most physicians" or "a reasonable physician" faced with similar circumstances. Their responses were used to measure whether they were more or less inclined to judge that the physician's conduct fell within or outside the standard of care, the legal basis for medical malpractice liability. Scenarios varied as to whether the physician accepted or rejected AI advice that was consistent with standard care or deviated from it.

Based on participant responses, the researchers concluded that when an AI recommends treatment in line with standard care, a physician can reduce exposure to liability by accepting it, but that rejecting non-standard advice from an AI in favor of a decision based on standard care does not provide a physician with a similar advantage.

“These results suggest that, at least with respect to potential jurors and lay understanding, the use of AI might be closer to the standard of care than we might think,” the authors observed in an invited perspective in JNM. “Following the advice of AI already reduces the risk of liability for injury that results from deviations from the standard of care. The contrary is not yet true though: deviating from nonstandard AI recommendations is not yet viewed as subjecting a physician to liability on its own.”

The researchers say their study contradicts prior research indicating that jurors are mistrustful of medical decisions based on AI recommendations, demonstrating that potential jurors may be more accepting of the integration of this technology into medical practice than previously thought. However, these results raise further questions as to what extent AI is distinct from any other confirmatory medical tool when it comes to the way jurors assess liability, and what impact increasing acceptance of AI may have on medical decision making.

The FDA has also been directing more of its attention toward AI and the methods it will use to evaluate new algorithms. Last week, the agency released its first Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) Action Plan to monitor the safety and effectiveness of AI- and ML-based medical software modifications.