AI thought knee X-rays show if you drink beer—they don’t



Artificial intelligence can be a useful tool to health care professionals and researchers when it comes to interpreting diagnostic images. Where a radiologist can identify fractures and other abnormalities from an X-ray, AI models can see patterns humans cannot, offering the opportunity to expand the effectiveness of medical imaging.

But a study in Scientific Reports highlights a hidden challenge of using AI in research—the phenomenon of highly accurate yet potentially misleading results known as “shortcut learning.”

The researchers analyzed more than 25,000 knee X-rays and found that AI models can “predict” unrelated and implausible traits, such as whether patients abstained from eating refried beans or drinking beer. While these predictions have no medical basis, the models achieved surprising levels of accuracy by exploiting subtle and unintended patterns in the data.

“While AI has the potential to transform medical imaging, we must be cautious,” says the study’s senior author, Dr. Peter Schilling, an orthopaedic surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center and an assistant professor of orthopaedics in Dartmouth’s Geisel School of Medicine.

“These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable,” Schilling says. “It’s crucial to recognize these risks to prevent misleading conclusions and ensure scientific integrity.”


The researchers examined how AI algorithms often rely on confounding variables—such as differences in X-ray equipment or clinical site markers—to make predictions rather than medically meaningful features. Attempts to eliminate these biases were only marginally successful—the AI models would just “learn” other hidden data patterns.
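The shortcut-learning failure mode described above can be illustrated with a toy sketch (not the study’s actual models or data): synthetic “X-rays” in which a medically meaningless label correlates with a bright site marker stamped in one corner. A naive classifier latches onto the marker and scores well, then collapses to chance once the confound is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_xrays(n, confounded=True):
    """Toy 8x8 'X-rays'. The label (e.g. 'drinks beer') is unrelated to
    anatomy, but at confounded sites a bright corner marker happens to
    co-occur with the positive label."""
    imgs = rng.normal(0.0, 1.0, size=(n, 8, 8))
    labels = rng.integers(0, 2, size=n)
    if confounded:
        imgs[labels == 1, 0, 0] += 3.0  # the spurious shortcut
    return imgs.reshape(n, -1), labels

# "Train": pick the single pixel whose mean differs most between classes,
# and split the two class means with a threshold.
Xtr, ytr = make_xrays(2000, confounded=True)
pixel = np.argmax(np.abs(Xtr[ytr == 1].mean(0) - Xtr[ytr == 0].mean(0)))
thresh = (Xtr[ytr == 1, pixel].mean() + Xtr[ytr == 0, pixel].mean()) / 2

def accuracy(X, y):
    return ((X[:, pixel] > thresh).astype(int) == y).mean()

Xc, yc = make_xrays(2000, confounded=True)    # shortcut still present
Xu, yu = make_xrays(2000, confounded=False)   # shortcut removed
print(f"confounded test accuracy:   {accuracy(Xc, yc):.2f}")  # high
print(f"deconfounded test accuracy: {accuracy(Xu, yu):.2f}")  # ~chance
```

The classifier never learns anything about the patient, only about the acquisition artifact, which is why its apparent accuracy evaporates on deconfounded data.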

“This goes beyond bias from clues of race or gender,” says Brandon Hill, a co-author of the study and a machine learning scientist at Dartmouth Hitchcock. “We found the algorithm could even learn to predict the year an X-ray was taken. It’s pernicious—when you prevent it from learning one of these elements, it will instead learn another it previously ignored. This danger can lead to some really dodgy claims, and researchers need to be aware of how readily this happens when using this technique.”

The findings underscore the need for rigorous evaluation standards in AI-based medical research. Overreliance on standard algorithms without deeper scrutiny could lead to erroneous clinical insights and treatment pathways.

“The burden of proof just goes way up when it comes to using models for the discovery of new patterns in medicine,” Hill says. “Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model ‘sees’ the same way we do. In the end, it doesn’t.”

“AI is almost like dealing with an alien intelligence,” Hill continues. “You want to say the model is ‘cheating,’ but that anthropomorphizes the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.”

Schilling, Hill, and study co-author Frances Koback, a third-year medical student in Dartmouth’s Geisel School, conducted the study in collaboration with the Veterans Affairs Medical Center in White River Junction, Vt.


Provided by
Dartmouth College


Citation:
AI thought knee X-rays show if you drink beer—they don’t (2024, December 11)
retrieved 12 December 2024
from https://medicalxpress.com/news/2024-12-ai-thought-knee-rays-beer.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
