Motion correction for brain MRI using deep learning and a novel
hybrid loss function
Lei Zhang1, Xiaoke Wang1, Michael Rawson2, Radu Balan3, Edward H. Herskovits1, Elias
Melhem1, Linda Chang1, Ze Wang1, and Thomas Ernst1
1Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, MD, USA
2Pacific Northwest National Laboratory
3Department of Mathematics and Center for Scientific Computation and Mathematical Modeling,
University of Maryland, College Park, MD, USA,
Corresponding Authors
Thomas Ernst, Ph.D.
Department of Diagnostic Radiology and Nuclear Medicine
University of Maryland School of Medicine
670 W. Baltimore Street, HSF-III, Room 1130,
Baltimore, MD 21202
Email: ternst@som.umaryland.edu
Ze Wang, Ph.D.
Department of Diagnostic Radiology and Nuclear Medicine
University of Maryland School of Medicine
670 W. Baltimore Street, HSF-III, Room 1130,
Baltimore, MD 21202
Email: ze.wang@som.umaryland.edu
Figure and Tables: 9 Figures and 1 Table (1 supplementary table)
Key words: MRI, motion correction, deep learning, brain
Abstract
Purpose
To develop and evaluate a deep learning-based method (MC-Net) to suppress motion artifacts in
brain magnetic resonance imaging (MRI).
Methods
MC-Net was derived from a UNet combined with a two-stage multi-loss function. T1-weighted
axial brain images contaminated with synthetic motions were used to train the network.
Evaluation used simulated T1 and T2-weighted axial, coronal, and sagittal images unseen during
training, as well as T1-weighted images with motion artifacts from real scans. Performance
indices included the peak signal to noise ratio (PSNR), structural similarity index measure
(SSIM), and visual reading scores. Two clinical readers scored the images.
Results
The MC-Net outperformed the other methods implemented in terms of PSNR and SSIM on the T1
axial test set. The MC-Net significantly improved the quality of all T1-weighted images (in all
orientations, and for simulated as well as real motion artifacts), both on quantitative measures and
visual scores. However, the MC-Net performed poorly on images with a contrast not seen during
training (T2-weighted).
Conclusion
The proposed two-stage multi-loss MC-Net can effectively suppress motion artifacts in brain
MRI without compromising image content. Given the efficiency of the MC-Net (single-image
processing time ~40 ms), it can potentially be used in real clinical settings. To facilitate further
research, the code and trained model are available at https://github.com/MRIMoCo/DL_Motion_Correction.
Introduction
Magnetic resonance imaging (MRI) is a widely used medical imaging modality because of its
ability to visualize the anatomy and function of tissues and organs, as well as pathologic
processes1. MRI provides high spatial resolution and diverse contrasts, making it superior to
many other imaging modalities for detecting and characterizing soft tissues (e.g., brain, abdominal
organs, and blood vessels) and pathologies.
Because of the sequential steps required to spatially encode the imaged object, MRI is
relatively slow; a typical 3D volume scan can take several minutes. The prolonged image
acquisition makes MRI sensitive to motion1,2. Unfortunately, motion in
human subjects is inevitable and can be caused by involuntary physiological movements, such as
respiration and cardiac motion, and unintended patient movements. Motion-induced image
artifacts can drastically deteriorate image quality and reduce diagnostic accuracy2. For example,
Andre et al. reported that almost 60% of 192 clinical brain MRI scans were contaminated with
motion artifacts3. Among these, 28% were marginally diagnostic or non-diagnostic and had to be
repeated. Because of motion-induced image artifacts, the annual loss of revenue per MR
scanner can exceed $100,000 for brain studies alone3.
A range of prospective correction strategies has been developed for motion artifacts4–8,
but they commonly have limitations, such as restricted scanner-platform accessibility,
applicability to only specific MR imaging sequences, and reduced effectiveness for certain types
of motion (e.g., in-plane versus through-plane movements). Therefore, retrospective motion correction by
means of post-processing provides a good complement. One promising approach involves deep
learning (DL)1,2,9–18, using deep convolutional neural networks (DCNNs) or other network
architectures with supervised learning. Given sufficient training pairs (inputs and reference
images), DCNNs can be trained to learn the transformation from the input (motion-corrupted
image) to the reference (motion-free image). Trained DCNNs have been used successfully to
solve many challenging and clinically important problems, e.g., arterial spin labeling perfusion
MRI denoising 13,19, image segmentation 20,21, and image registration 22,23.
DCNNs appear to be well-suited for retrospective correction of motion artifacts since there
are no obvious conventional algorithms to solve the problem and yet expert readers can “read
through” the artifacts to some degree. Recent studies demonstrate that DCNNs can be used to
attenuate motion artifacts in brain MRI scans using a data-driven approach without prior
knowledge. For instance, variational autoencoders (VAEs) and generative adversarial networks
(GANs) were implemented for retrospective correction of rigid and non-rigid motion artifacts
in motion-affected MR scans1. GANs were also used for motion correction in multi-shot MRI
12. A conditional GAN improved the image quality of predicted motion-corrected images
compared to motion-corrupted images24. Finally, an encoder-decoder network was able to
suppress motion artifacts with motion simulation augmentation2.
The purpose of this study was to implement and comprehensively evaluate a new deep
neural network architecture and loss function for motion correction. The methodology and scope
of this study are different from previous studies. The novel and unique contributions of this paper
include: First, a new loss function was proposed, which contained an L1 component for
penalizing overall image artifacts and a total variation component to penalize the loss of image
details such as boundaries. Accordingly, a two-stage training strategy was implemented to first
minimize the overall motion artifacts and then address both the residual motion-induced
artifacts and the loss of image details such as boundaries. Second, the generalizability of the
trained model was assessed using images with different contrast than that of the training data.
Third, to ensure rigor and demonstrate clinical utility, substantial evaluations were made using
different levels of synthetic motions and in-vivo data through both objective performance indices
and subjective reading by experienced clinicians. Motion-free images were also used to assess
potential over-corrections by the trained DL networks. Finally, to allow other researchers to
reproduce our work or use the methods to process their own data, we have released the code and
sample data at https://github.com/MRIMoCo/DL_Motion_Correction.
Methods
2.1 MC-Net
The proposed deep learning-based method (MC-Net) takes a motion-corrupted image as
input and outputs a motion-corrected image. The method implements a modified UNet (Figure 1)
as its neural-network structure. The MC-Net was trained with a novel two-stage training strategy
using a hybrid loss function L that combines an L1 loss and a total variation (TV) loss25:

L(I, I_0) = α L_1(I, I_0) + β L_TV(I)        (1)

L_1(I, I_0) = Σ_{i,j} |I_{i,j} − I_{0;i,j}|        (2)

L_TV(I) = Σ_{i,j} ( |I_{i+1,j} − I_{i,j}| + |I_{i,j+1} − I_{i,j}| )        (3)

where I and I_0 are the motion-corrupted and motion-free images, and i and j are row and column
indices. During the first training stage, we used the L1 loss only [(α, β) = (1, 0)] to
suppress overall motion-induced artifacts. The pre-trained stage-1 model was then fine-tuned in
stage 2 by turning on the TV-loss component [(α, β) = (1, 1)]; this penalizes boundary
artifacts in addition to the overall artifacts.
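The two loss components and their stage-dependent weighting can be sketched in NumPy as follows. This is an illustrative sketch, not the released implementation: the use of means rather than sums (which only rescales the effective α and β) and the anisotropic TV form are our assumptions.

```python
import numpy as np

def hybrid_loss(pred, ref, alpha=1.0, beta=0.0):
    """Hybrid loss L = alpha * L1(pred, ref) + beta * TV(pred).

    pred : 2D array, network output (motion-corrected image).
    ref  : 2D array, motion-free reference image.
    Stage 1 trains with (alpha, beta) = (1, 0); stage 2 fine-tunes
    the stage-1 model with (alpha, beta) = (1, 1).
    """
    # L1 term: mean absolute difference between output and reference.
    l1 = np.mean(np.abs(pred - ref))
    # Anisotropic total variation of the output: mean absolute
    # differences between neighboring rows and neighboring columns.
    tv = np.mean(np.abs(pred[1:, :] - pred[:-1, :])) \
       + np.mean(np.abs(pred[:, 1:] - pred[:, :-1]))
    return alpha * l1 + beta * tv
```

A stage-1 step would call `hybrid_loss(out, ref, 1.0, 0.0)`; stage 2 switches to `hybrid_loss(out, ref, 1.0, 1.0)` while continuing from the stage-1 weights.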
2.2 Motion Corrupted Images
The pipeline that generates motion-corrupted k-space data is shown in Figure 2. The project
used images with simulated motion artifacts, based on deidentified brain MRI scans from 52
human subjects (50 male, 2 female, age 48.6 ± 9.1 years) previously enrolled in research
studies. All data were acquired on a 3T scanner (TIM Trio, Siemens Healthcare, Erlangen,
Germany). The ability of the MC-Net to correct real (non-simulated) images with motion
artifacts was assessed using motion-corrupted scans from five additional subjects (2 male, 3
female, age 19 ± 4.9 years).
The source images were 3D sagittal magnetization-prepared rapid gradient-echo (MP-
RAGE) scans and 2D axial fluid-attenuated inversion-recovery (FLAIR) scans from 52 subjects.
MP-RAGE images were collected with the following parameters: TR=2.2s; TE=4.47ms; TI=1s;
resolution=1mm isotropic; matrix size=256×256×160, and FLAIR images were collected with
the following parameters: TR=9.1s; TE=84ms; echo train length=11; matrix size=256×204; in-
plane resolution = 1mm2; slice thickness=3mm; slice spacing=3mm; TI=2.5s. All source images
were assessed visually to ensure they did not contain motion artifacts.
Forty-two axial in-plane motion trajectories of 256 temporal samples each were synthesized
from in-vivo head movements measured with the prospective acquisition correction (PACE)
algorithm27 during BOLD functional MRI (fMRI) scans. The source motion trajectories had
translations <2mm and rotations <2°, and were subsequently multiplied by eight and reduced
from 6 degrees of freedom to in-plane motion (3 degrees of freedom). The offsets of the
trajectories were normalized such that the motion along all axes was zero at the center of the
trajectory. The severity of each applied motion trajectory is indicated by the standard deviation
of the motion across time over all three in-plane degrees of freedom (L2-norm of in-plane
translations in mm and in-plane rotation in degrees).
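The trajectory handling described above can be illustrated roughly as follows. This is a simplified sketch, not the authors' pipeline: it applies per-phase-encode-line in-plane translations to 2D k-space via the Fourier shift theorem (rotations are omitted for brevity), and `motion_severity` is one plausible reading of the severity metric; all function names and parameters are our own.

```python
import numpy as np

def corrupt_kspace_with_translation(image, traj_mm, fov_mm=256.0):
    """Corrupt a 2D image with per-phase-encode-line translations.

    image   : (N, N) motion-free magnitude image.
    traj_mm : (N, 2) array of [tx, ty] translations in mm, one per
              phase-encode line (one time point per line).
    A rigid translation multiplies k-space by a linear phase ramp
    (Fourier shift theorem); each acquired line sees the shift that
    was in effect at its acquisition time.
    """
    n = image.shape[0]
    k = np.fft.fftshift(np.fft.fft2(image))
    kx = np.fft.fftshift(np.fft.fftfreq(n, d=fov_mm / n))  # cycles/mm
    ky = kx.copy()
    corrupted = np.empty_like(k)
    for line in range(n):
        tx, ty = traj_mm[line]
        phase = np.exp(-2j * np.pi * (kx * tx + ky[line] * ty))
        corrupted[line, :] = k[line, :] * phase
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))

def motion_severity(traj, scale=8.0):
    """Severity of a (T, 3) trajectory [tx_mm, ty_mm, rot_deg]:
    amplify, zero the trajectory at its temporal center, then take the
    L2-norm over the three per-axis standard deviations across time."""
    traj = np.asarray(traj, dtype=float) * scale
    traj = traj - traj[len(traj) // 2]  # zero motion at trajectory center
    return float(np.linalg.norm(traj.std(axis=0)))
```

With an all-zero trajectory the corruption step returns the input image unchanged, which provides a simple sanity check of the phase-ramp bookkeeping.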