Deep learning learns directly from training data how to separate the signal in an image from artifacts, noise and variations in signal intensity. Many existing deep learning-based MRI reconstruction methods can remove artifacts and noise, but they learn from a ground-truth reference, which can be difficult to obtain. “In an MRI, it may be easy or hard to scan someone, depending on their physical health, but everyone still has to breathe,” Kamilov said. “When they breathe, their internal organs move, and we have to determine how to correct for those movements.”
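To make the ground-truth requirement concrete, here is a minimal sketch (not the authors' code) of how a conventional supervised reconstruction network is trained: the toy CNN `denoiser` and the helper `supervised_step` are hypothetical stand-ins, but they show that every training step needs a clean reference image, which is exactly what free-breathing scans fail to provide.

```python
# Minimal, hypothetical sketch of supervised training for MRI artifact removal.
# The target of every step is a clean "ground truth" image.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                 # toy stand-in for a real reconstruction CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def supervised_step(corrupted, ground_truth):
    """One gradient step: artifact-corrupted image in, clean reference as target."""
    optimizer.zero_grad()
    loss = loss_fn(denoiser(corrupted), ground_truth)   # requires a clean reference
    loss.backward()
    optimizer.step()
    return loss.item()
```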
In Phase2Phase, the team feeds the deep learning model only sets of bad images and trains the neural network to predict a good image from a bad one, without a ground-truth reference. Weijie Gan, a doctoral student in Kamilov’s lab and a co-first author on the paper, wrote the Phase2Phase software that removes noise and artifacts. Cihat Eldeniz, an instructor of radiology at the Mallinckrodt Institute of Radiology and co-first author, worked on the MRI acquisition and motion detection used in the study. They modeled Phase2Phase after an existing machine learning method known as Noise2Noise, which restores images without clean data.
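The following sketch illustrates the Noise2Noise-style idea behind this training scheme; it is an assumption-laden toy example, not the released Phase2Phase code. It mirrors the supervised sketch above, except that the target is now another corrupted image of the same anatomy (for example, a different respiratory phase), so no clean reference is ever needed.

```python
# Hypothetical sketch of Noise2Noise-style training as used conceptually in Phase2Phase:
# both the input and the target are "bad" (artifact-corrupted) images.
import torch
import torch.nn as nn

net = nn.Sequential(                      # placeholder for the actual network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def phase2phase_step(phase_a, phase_b):
    """One step trained only on corrupted images: one phase is the input, another
    corrupted phase of the same subject is the target. Averaged over many such
    pairs, the network is pushed toward the underlying clean image."""
    optimizer.zero_grad()
    loss = loss_fn(net(phase_a), phase_b)   # target is itself artifact-corrupted
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice that makes this work is that the artifacts in the two phases are largely independent, so matching one corrupted image to another cannot be done by reproducing the artifacts themselves.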
In a retrospective study, the team evaluated MRI data from 33 participants (15 healthy volunteers and 18 patients with liver cancer), all of whom were allowed to breathe normally while in the scanner. The Phase2Phase reconstructions were compared with images reconstructed by another deep learning method, UNet3DPhase, which is trained on a high-quality ground truth; by compressed sensing; and by multicoil nonuniform fast Fourier transform (MCNUFFT). In addition, Phase2Phase successfully reconstructed 66 MRI data sets acquired at another institution with different acquisition parameters, demonstrating its broad applicability.
Source: Healthcare in Europe