Unsupervised Cross-Modality Domain
Adaptation for Vestibular Schwannoma
Segmentation and Koos Grade Prediction based
on Semi-Supervised Contrastive Learning
Luyi Han1,2, Yunzhi Huang3⋆, Tao Tan2(✉), and Ritse Mann1,2
1Department of Radiology and Nuclear Medicine, Radboud University Medical
Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands.
2Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121,
1066 CX, Amsterdam, The Netherlands.
3School of Automation, Nanjing University of Information Science and Technology,
Nanjing 210044, China.
{taotanjs}@gmail.com
Abstract. Domain adaptation has been widely adopted to transfer styles across multiple vendors and centers, as well as to complete missing modalities. For this challenge, we propose an unsupervised domain adaptation framework for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction. We learn a shared representation from both ceT1 and hrT2 images, recover the other modality from this latent representation, and utilize proxy tasks of VS segmentation and brain parcellation to enforce the structural consistency of images during domain adaptation. After generating the missing modality, an nnU-Net model is used for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve Koos grade prediction. On the CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a macro-averaged mean squared error of 0.3941. Our code is available at https://github.com/fiy2W/cmda2022.superpolymerization.
Keywords: Domain Adaptation · Semi-Supervised Contrastive Learning · Segmentation · Vestibular Schwannoma.
1 Introduction
Domain adaptation has recently been employed in various clinical settings to improve the applicability of deep learning approaches. The goal of the Cross-Modality Domain Adaptation (CrossMoDA) challenge⁴ is to segment two key brain structures, the vestibular schwannoma (VS) and the cochlea, and to predict the Koos grading scale for the VS. Both tasks are required for measuring VS growth and for evaluating the treatment plan (surveillance, radiosurgery, or open surgery). Although contrast-enhanced T1 (ceT1) MR imaging is commonly used for the diagnosis and surveillance of patients with VS, research on non-contrast imaging, such as high-resolution T2 (hrT2), is growing due to its lower risk and cost. CrossMoDA therefore aims to transfer a model learned from annotated ceT1 images to unpaired and unlabeled hrT2 images via domain adaptation.

⋆ Luyi Han and Yunzhi Huang contributed equally to this work.
⁴ https://crossmoda2022.grand-challenge.org/
2 Related Work
Unsupervised domain adaptation for VS and cochlea segmentation has been extensively validated in previous research [6]. Most methods employ an image-to-image translation approach, e.g., CycleGAN [16], to generate pseudo-target-domain images from source-domain images; the generated images and the corresponding manual annotations are then used to train segmentation models. Dong et al. [5] utilize NICE-GAN [2], which reuses discriminators for encoding, to improve the performance of domain adaptation and downstream segmentation. Choi [3] proposes a data augmentation method that halves the intensity in the tumor area of generated hrT2 images. Shin et al. [13] employ an iterative self-training strategy: (1) train a student model with annotated generated hrT2 and pseudo-labeled real hrT2; (2) make the student the new teacher and update the pseudo labels for the real hrT2. Following these works, our proposed method focuses on extracting joint representations from multi-modality MRI, which reduces the distance between modalities in the latent space.
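To make the self-training recipe of Shin et al. [13] concrete, the following is a minimal PyTorch-style sketch of such a teacher-student loop. It is an illustration, not the original implementation: `make_student`, `train_fn`, the data loaders, and the confidence threshold are hypothetical placeholders (the published method builds on a full nnU-Net training pipeline).

```python
import torch

def self_training(teacher, make_student, labeled_loader, unlabeled_loader,
                  train_fn, n_rounds=3, conf_threshold=0.9):
    """Iterative self-training: pseudo-label real hrT2 with the current
    teacher, retrain a student on generated + pseudo-labeled data, then
    promote the student to teacher for the next round."""
    for _ in range(n_rounds):
        # Pseudo-label the unlabeled real hrT2 volumes with the teacher.
        pseudo_labeled = []
        teacher.eval()
        with torch.no_grad():
            for image in unlabeled_loader:
                prob = torch.softmax(teacher(image), dim=1)
                conf, label = prob.max(dim=1)
                # Mask out low-confidence voxels so the loss ignores them
                # (assumes train_fn uses ignore_index=-1).
                label[conf < conf_threshold] = -1
                pseudo_labeled.append((image, label))
        # Train a fresh student on annotated generated hrT2
        # plus the pseudo-labeled real hrT2.
        student = make_student()
        train_fn(student, labeled_loader, pseudo_labeled)
        # The student becomes the teacher for the next round.
        teacher = student
    return teacher
```

Filtering by prediction confidence is one common way to keep noisy pseudo labels from dominating training; the actual filtering rule in [13] may differ.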
Classification tasks for medical images are typically more difficult than segmentation due to scarcer annotations. In recent years, contrastive learning has led to state-of-the-art performance in self-supervised representation learning [14,1,7,10]. The key idea is to pull an anchor and a "positive" sample together in latent space while pushing the anchor away from other "negative" samples. On this basis, contrastive learning can be applied to multi-modality pre-training. To improve the sensor-setup flexibility of robots, Meyer et al. [11] propose a multimodal contrastive learning approach that learns from RGB-depth images. Yuan et al. [15] develop a joint visual-textual pre-training approach that addresses both intra-modality and inter-modality learning. For medical image analysis, Huang et al. [8] develop an attentional contrastive learning framework for global and local representation learning between images and radiology reports. Inspired by these works, we employ contrastive learning in the pre-training phase to strategically mine multi-modality representations for different types of samples.
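As an illustration of this anchor/positive/negative objective, below is a minimal InfoNCE-style contrastive loss in PyTorch. It is a generic sketch of the idea described above, not the exact formulation used in our framework; the tensor shapes, the temperature value, and the function name are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """anchor, positive: (B, D) embeddings; negatives: (B, K, D).
    Pulls each anchor toward its positive and away from K negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    # Cosine similarities scaled by the temperature.
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
    neg_logits = torch.einsum('bd,bkd->bk', anchor, negatives) / temperature  # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)                        # (B, 1+K)
    # The positive pair is always at index 0, so the target class is 0.
    target = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, target)
```

In a multi-modality setting, the anchor and positive would typically be embeddings of corresponding ceT1 and hrT2 samples, with negatives drawn from other patients in the batch.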