
From one brain scan, more information for medical artificial intelligence

Press releases may be edited for formatting or style | June 20, 2019
CAMBRIDGE, MA -- MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, including those that can analyze medical scans to help diagnose and treat brain conditions.

An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer's disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place.

In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can be used to better train machine-learning models to find anatomical structures in new scans -- the more training data, the better those predictions.
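At a high level, the amplification step turns one labeled scan into many labeled training pairs by applying transforms learned from the unlabeled scans. The sketch below is illustrative only: the function and parameter names (`amplify_dataset`, `warp`, `intensify`) are assumptions, and the transforms here are placeholders for the learned ones described in the paper.

```python
import numpy as np


def amplify_dataset(atlas_img, atlas_labels, transforms):
    """Turn one labeled scan into many labeled training examples.

    `transforms` is a list of (warp, intensify) callable pairs, each
    notionally learned from a different unlabeled scan; the names and
    structure here are illustrative, not the paper's actual API.
    """
    dataset = []
    for warp, intensify in transforms:
        # Spatial change applies to both image and labels;
        # brightness/contrast change applies to the image only.
        img = intensify(warp(atlas_img))
        labels = warp(atlas_labels)
        dataset.append((img, labels))
    return dataset
```

Because each synthesized pair carries labels derived from the single hand-labeled scan, the downstream segmentation model can be trained on far more examples than an expert could annotate by hand.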
The crux of the work is automatically generating data for the "image segmentation" process, which partitions an image into regions of pixels that are more meaningful and easier to analyze. To do so, the system uses a convolutional neural network (CNN), a machine-learning model that's become a powerhouse for image-processing tasks. The network analyzes many unlabeled scans from different patients and different equipment to "learn" anatomical, brightness, and contrast variations. Then, it applies a random combination of those learned variations to a single labeled scan to synthesize new scans that are both realistic and accurately labeled. These newly synthesized scans are then fed into a different CNN that learns how to segment new images.
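The per-scan synthesis step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned spatial (anatomical) and appearance (brightness/contrast) transforms are stubbed here with random smooth fields, and the function name `synthesize` is an assumption. The key detail it shows is that the warp is applied to the image with interpolation but to the labels with nearest-neighbor lookup, so the synthesized scan stays accurately labeled.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def synthesize(atlas_img, atlas_labels, rng):
    """Synthesize one labeled training example from a single labeled scan.

    Stand-ins: in the described pipeline, the spatial and appearance
    transforms are learned from unlabeled scans; here they are stubbed
    with random smooth fields to show the composition step only.
    """
    h, w = atlas_img.shape

    # Smooth random displacement field (stand-in for a learned anatomical warp).
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * 5
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * 5
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [ys + dy, xs + dx]

    # Smooth multiplicative gain field (stand-in for a learned
    # brightness/contrast transform).
    gain = 1.0 + gaussian_filter(rng.standard_normal((h, w)), sigma=16) * 0.2

    # Warp the image with linear interpolation; warp the labels with
    # nearest-neighbor (order=0) so label values stay valid class indices.
    new_img = map_coordinates(atlas_img, coords, order=1) * gain
    new_labels = map_coordinates(atlas_labels, coords, order=0)
    return new_img, new_labels
```

Repeating this with many different learned transform combinations yields the large, varied, and fully labeled dataset used to train the segmentation CNN.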

"We're hoping this will make image segmentation more accessible in realistic situations where you don't have a lot of training data," says first author Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL). "In our approach, you can learn to mimic the variations in unlabeled scans to intelligently synthesize a large dataset to train your network."

There's interest in using the system, for instance, to help train predictive-analytics models at Massachusetts General Hospital, Zhao says, where only one or two labeled scans may exist of particularly uncommon brain conditions among child patients.
