
Communications of the ACM


Deep Learning Speeds MRI Scans

Figure: A medical professional examines an MRI scan.

Since its invention in the 1970s, magnetic resonance imaging (MRI) has opened up a window onto the world beneath our skin. By exploiting the way the nuclei of hydrogen atoms in water and fat molecules resonate in a strong magnetic field, MRI can generate high-contrast three-dimensional images of soft body tissues, joints, and bones. MRI allows clinicians to see evidence of injury and disease within the body, ranging from torn muscle to damaged cartilage, ligaments, and tendons, as well as tumors or other disease lesions within major organs, and blood-flow blockages in the brain, all without the ionizing radiation of the X-rays used in computed tomography (CT) scans.

There is, however, a considerable usability problem with the MRI scanner as we currently know it: the technology takes far too long to acquire images, forcing patients to lie still in the confined maw of a massive magnet for up to an hour. With the observable world reduced to a halo of grayish plastic just inches from one's nose, it is a particularly tough experience for those suffering from claustrophobia. It can be disturbingly noisy, too: the scanner's magnetic components can rattle at 110 decibels or more when energized.

"It can take typically three or four minutes to acquire each magnetic resonance image, and if you're lying on the table for 30 or 40 minutes, or sometimes even an hour, depending on the type of exam, it gets hard to lie still for that entire time. It's uncomfortable for the patient, and especially so if they're a child," says Michael Recht, Louis Marx Professor and chairman of the department of radiology at NYU Langone Health.

Help is now at hand, and from an unlikely quarter. Facebook, the Menlo Park, CA-based online social network, has teamed up with radiologists at New York University's Langone Medical Center in Manhattan to develop an artificial intelligence (AI)-based imaging accelerator for MRI scanners.

What Facebook and NYU Langone have developed is a deep neural network (DNN) that allows MRI scans to be performed many times faster. Called FastMRI, the DNN has been trained to generate MRI images using far less magnetic resonance data than before, and that sparse data requirement significantly accelerates the scan time.


Pedal to the Metal

Acceleration is, in fact, a long-held quest in MRI, says Recht's colleague, Daniel Sodickson, a professor of radiology, neuroscience, and physiology at NYU Langone Health and a specialist in biomedical imaging. He should know: in 1996, he developed an early MRI speed-up technique called parallel imaging. To understand how this works, consider the way an MRI scanner operates in general:

  • When the patient is placed in the magnetic field of the scanner's main magnet, hydrogen nuclei (protons) in the body's water and fat molecules line up with that magnetic field;
  • Three orthogonal electromagnetic coils, oriented along the x, y, and z axes, project pulsed, spatially varying magnetic field gradients into the body, making the protons momentarily change their polarization direction;
  • After each pulse projection subsides, the protons relax and line up once again with the main magnetic field, revealing their location by emitting distinctive radio frequencies that are picked up by a detector known as the receiver coil;
  • The frequency and phase of the detected signal allow the MRI software to map the location of water and fat in the body, and to mathematically derive high-contrast images from that data.
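The mapping in that last step can be sketched in a few lines of numpy. The phantom image and its dimensions below are purely illustrative; a real scanner measures k-space directly through its receiver coils rather than Fourier-transforming a known image:

```python
import numpy as np

# Toy "anatomy": a 64x64 phantom with one bright square of tissue.
# (A hypothetical stand-in for real scanner data.)
phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0

# What the scanner effectively measures: k-space, the frequency- and
# phase-domain representation of the object.
kspace = np.fft.fft2(phantom)

# Standard reconstruction: an inverse Fourier transform maps k-space
# back to a spatially resolved image.
image = np.fft.ifft2(kspace).real

# With fully sampled k-space, the reconstruction is exact.
assert np.allclose(image, phantom, atol=1e-10)
```

The round trip is lossless only because every point of k-space is present; the acceleration question below is what happens when most of it is not.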

Sodickson realized the more detectors an MRI scanner could use at once, the faster an image could be computed from frequency and phase data—known as k-space data, as it is not image data in the conventional sense of being an array of pixels.

"Parallel imaging—gathering data in lots of different detectors arranged around the body—at least doubled our speed," he says.

To speed the process further, the industry has tried a technique called compressed sensing, in which algorithms inform the array of detectors which k-space data they can most probably ignore. "It's almost like pre-compressing an image with JPEG," says Sodickson.

However, it is far from perfect: compressed sensing can lead to blocky, blurred image artifacts that might confound diagnosis. What is needed is a way to learn, with much higher accuracy, which k-space data does not need to be collected by the detectors. An MRI scanner, says Recht, might project 256 magnetic gradient signals into the body to give different k-space "views" of the area under scrutiny. Because many of the signals overlap and some view angles might be unnecessary, many of those projections are likely to be redundant and need not be acquired at all.

"It's just hard to say in advance which projections you can skip and so accelerate the scan. With deep learning, we can learn which ones you can skip and still produce the entire image," says Sodickson.
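The cost of skipping projections naively can be seen in the same toy setting: drop most rows of k-space and reconstruct with a plain inverse FFT, and the image degrades badly. Closing that gap is precisely what the learned reconstruction is for. The mask and phantom here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

phantom = np.zeros((64, 64))
phantom[16:48, 24:40] = 1.0
kspace = np.fft.fft2(phantom)

# Keep only 16 of 64 phase-encode lines (25% of k-space), a crude
# stand-in for skipping three-quarters of the projections at random.
keep = rng.choice(64, size=16, replace=False)
mask = np.zeros(64, dtype=bool)
mask[keep] = True
undersampled = kspace * mask[:, None]

# A plain inverse FFT of undersampled k-space yields an aliased image.
naive = np.fft.ifft2(undersampled).real
rel_err = np.linalg.norm(naive - phantom) / np.linalg.norm(phantom)
assert rel_err > 0.1  # substantial degradation without a smarter method
```

Which lines are safe to drop depends on the anatomy being imaged, which is what makes it a learning problem rather than a fixed recipe.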

It was in 2018, while the NYU Langone team was puzzling over this issue of k-space data "undersampling," as it is called, and running its own preliminary experiments on acceleration by deep learning, that Sodickson discovered the Facebook AI Research (FAIR) lab was actively seeking projects in the "AI-for-good" arena, through which it hoped to have a positive societal impact.


A Battle Worth Fighting

When Sodickson and his colleagues told Facebook precisely what was needed, the Californians' ears pricked up, he recalls: "They said, 'wait a second, you want to reconstruct an image from not enough data? But you don't want that image to just be plausible, you want it to actually be true to that patient? Now that's an interesting AI problem.'"


As a result, Facebook AI Research and NYU Langone agreed to jointly develop a deep-learning-based faster MRI system—and also that the project would be open sourced on Microsoft's GitHub platform. Unlike neural networks trained to recognize an image or a voice from input data, the researchers had to craft a deep learning (DL) network capable of generating an accurate, diagnostic-quality image from an undersampled subset of an MRI scanner's acquired k-space frequency and phase data.

Initially, Facebook took NYU Langone's anonymized and open-sourced knee MRI dataset—which comprises the k-space projection data from 1,200 scans of 108 patients' knees, and the full images the MRI software resolved—and coded up a standard-issue DL model that could learn the relationship between them. "But they got lousy results," says Sodickson.

What they needed, he says, was a far more nuanced network informed by the physics of magnetic resonance. They then built a special type of DL model, called a variational network, that did not simply undergo blind machine-learning training on k-space and image data alone: it was also trained with key information about the physics of the scanner, including maps of how receiver-coil sensitivities vary across the detector array.
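In spirit (the details differ from the actual fastMRI architecture), such a physics-informed reconstruction can be caricatured as an unrolled loop that alternates a data-consistency step on the measured k-space samples with a learned refinement. In this sketch the "learned" step is a fixed smoothing filter, a hypothetical stand-in for the trained network, and coil sensitivities are omitted:

```python
import numpy as np

def smooth(x):
    """Placeholder for the trained regularizer: neighbor averaging."""
    return 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1))

def unrolled_recon(kspace_meas, mask, n_iters=10, lam=0.2):
    """Toy unrolled loop: data consistency plus a refinement step."""
    x = np.fft.ifft2(kspace_meas)               # zero-filled start
    for _ in range(n_iters):
        k = np.fft.fft2(x)
        k[mask] = kspace_meas[mask]             # re-impose measured samples
        x = np.fft.ifft2(k)
        x = (1 - lam) * x + lam * smooth(x)     # "learned" refinement
    return x.real

# Hypothetical usage with a regular 4x row-undersampling mask:
phantom = np.zeros((64, 64))
phantom[16:48, 24:40] = 1.0
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True                             # keep every 4th k-space row
measured = np.fft.fft2(phantom) * mask
recon = unrolled_recon(measured, mask)
```

In a real variational network, the smoothing function is replaced by a trained convolutional network and the step weights are themselves learned, one set per unrolled iteration.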

To test the idea, the joint team trained its network for 155 hours on eight cloud-based GPUs, and found their new, scanner-physics-aware approach made all the difference. The network could discard three-quarters of the raw k-space data and still generate diagnostic-quality images, an almost fourfold acceleration, the Facebook/Langone team reported in the December 2020 edition of the American Journal of Roentgenology.

Better still, a jury of six senior radiologists judged the images of the accelerated knee scans to be of better quality than the standard-speed images. Also, in early, as-yet-unreported undersampled tests on MRIs of the brain, it looks like variational DL-based scans can be accelerated six to eight times, says Recht, while Sodickson is predicting a 10-fold improvement for some types of abdominal scan. "So if that took an hour before, it would now just be six minutes in the scanner, or if it was ten minutes, it'd now just be one minute," Sodickson says.

In getting faster, better images from less data, their result might seem counterintuitive, if not downright magical. Yet Anuroop Sriram, a senior Facebook AI research engineer on the FastMRI project, cautions it is important to remember the scanner is not simply sampling pixels like some kind of camera, but is capturing something quite abstract: raw frequency and phase data.

The traditional way of turning k-space data into a readable scan is to apply a mathematical process called an inverse Fourier transform, which translates it from the frequency domain to a spatially resolved image. "But if you try to use that process on less than a full scan of k-space data, you don't end up with a useful image," says Sriram.

"Our FastMRI approach is creating images in a completely new way: rather than using that mathematical process, FastMRI uses artificial intelligence to create images from the k-space data, and we've been able to train the AI to create accurate images from undersampled k-space data."

Nafissa Yakubova, Facebook's AI program manager, believes the lab has hit its target of making a societal impact. "We've advanced AI to address this problem, and done so in a way that could actually one day be used in medical practice, benefiting patients, clinics, and communities," she says.

To do that, says Recht, the Langone team is beginning a multihospital study in collaboration with market-leading MRI scanner vendors Siemens, General Electric, and Philips Healthcare. The overarching aim of the study is to prove that the FastMRI DL model generalizes not only across musculoskeletal (such as knee), brain, and abdominal scans, but also across multiple vendors' scanners. "Our goal is to get this as fast as possible, to as many companies as possible, so that they can make this available to patients everywhere," says Sodickson.

Others are on the FastMRI group's trail, with machine learning researchers at Imperial College London, the Korea Advanced Institute of Science & Technology (KAIST), Stanford University, and China's Shenzhen Institute of Advanced Technology all independently researching their own deep learning-based methodologies for MRI acceleration.

"Deep learning is much better than the traditional parallel imaging and compressed sensing approaches. Those classical approaches are usually based on top-down models, so if the model fails in a real acquisition scenario, image degradation is unavoidable," says Jong Chul Ye, a signal processing and ML researcher at KAIST in Daejeon, South Korea.


The challenge now, Ye says, is to move MRI acceleration from the supervised learning the Facebook/NYU team used to train its variational model to more efficient unsupervised models. "Many groups in the imaging community, including mine, are now working on unsupervised learning approaches. This area is still quite an open one, and it's one that's going to need a lot of machine learning know-how."

* Further Reading

Zbontar, J., Knoll, F., Sriram, A., et al. fastMRI: An Open Dataset and Benchmarks for Accelerated MRI (a large-scale dataset of both raw MRI measurements and clinical MRI images), arXiv:1811.08839, 2018

Cha, E., Oh, G., and Ye, J.C. Geometric Approaches to Increase the Expressivity of Deep Neural Networks for MR Reconstruction, IEEE Journal of Selected Topics in Signal Processing, October 2020, Vol. 14, No. 6, pp. 1292–1305. doi: 10.1109/JSTSP.2020.2982777

Recht, M.P., Sodickson, D.K., Knoll, F., Yakubova, N., Zitnick, C.L., et al. Using Deep Learning to Accelerate Knee MRI at 3T: Results of an Interchangeability Study, American Journal of Roentgenology, December 2020, Vol. 215, No. 6, pp. 1421–1429

Sodickson, D.K., and Manning, W.J. Simultaneous Acquisition of Spatial Harmonics (SMASH): Fast Imaging with Radiofrequency Coil Arrays, Magnetic Resonance in Medicine, October 1997, Vol. 38, No. 4, pp. 591–603

Breaking the MRI Sound Barrier, General Electric



Paul Marks is a technology journalist, writer, and editor based in London, U.K.

©2021 ACM  0001-0782/21/4

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.

