PriorNet: lesion segmentation in PET-CT including prior tumor appearance information

Simone Bendazzoli 1,2 and Mehdi Astaraki 1,3

1 Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
2 Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
3 Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
Abstract. Tumor segmentation in PET-CT images is challenging due to the dual nature of the acquired information: low metabolic information in CT and low spatial resolution in PET. The U-Net architecture is the most common and widely recognized approach for developing fully automatic image segmentation methods in the medical field. We propose a two-step approach that aims to refine and improve the segmentation performance on tumoral lesions in PET-CT. The first step generates a prior tumor appearance map from the PET-CT volumes, regarded as prior tumor information. The second step, consisting of a standard U-Net, receives the prior tumor appearance map and the PET-CT images to generate the lesion mask. We evaluated the method on the 1014 cases available for the AutoPET 2022 challenge, and the results showed an average Dice score of 0.701 on the positive cases.
Keywords: Tumor segmentation · deep learning · PET-CT.
1 Introduction
Tumor lesion segmentation is one of the primary tasks performed on PET-CT scans in oncological practice. The main aim is to identify and delineate the tumor region, enabling quantitative assessment, feature extraction, and planning of the treatment strategy accordingly.
Training a U-Net model [5] is the most common supervised deep learning approach and has yielded promising results for different medical image segmentation tasks [2,6]. We propose a two-step approach, where the first step aims at generating and providing prior tumor appearance information to the second step, a conventional U-Net employed for the tumor segmentation.
2 Method description
The proposed method consists of two main modules. First, inspired by our recent
Normal Appearance Autoencoder (NAA) model [1], the appearance of healthy
anatomies from PET-CT images is learned by training an inpainting model. Specifically, a Partial Convolution Neural Network (PCNN) [4] was employed to capture the distributions of healthy anatomies. Prior information regarding the appearance of the tumors was then estimated by calculating the residuals between the reconstructed pseudo-healthy images and the original tumoral ones. Second, this prior information, highlighting the presence of candidate tumoral regions, was added as an additional channel to a supervised segmentation network in order to guide the attention of the model to the candidate regions.
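As a rough illustration of this two-step pipeline, the following minimal PyTorch sketch computes a residual-based prior map and concatenates it with the PET-CT channels before the U-Net. The network objects, the channel layout, and the choice of summing the per-modality residuals are assumptions made for illustration, not details taken from the paper.

```python
import torch

# Hypothetical pre-trained components (names are illustrative, not from the paper's code):
#   inpainting_net   : PCNN mapping a PET-CT volume to a pseudo-healthy reconstruction
#   segmentation_net : standard U-Net whose first convolution accepts 3 input channels

def build_prior_map(inpainting_net, ct, pet):
    """Step 1: estimate the prior tumor appearance map as the residual between
    the pseudo-healthy reconstruction and the original PET-CT volume."""
    x = torch.cat([ct, pet], dim=1)                       # (B, 2, D, H, W)
    with torch.no_grad():
        pseudo_healthy = inpainting_net(x)                # same shape as x
    residual = torch.abs(x - pseudo_healthy)
    # Collapsing the per-modality residuals by summation is an assumption made
    # here for illustration; the paper only states that a prior map is produced.
    return residual.sum(dim=1, keepdim=True)              # (B, 1, D, H, W)

def segment_with_prior(segmentation_net, ct, pet, prior):
    """Step 2: feed CT, PET, and the prior map as a 3-channel input to the U-Net."""
    x = torch.cat([ct, pet, prior], dim=1)                # (B, 3, D, H, W)
    return segmentation_net(x)                            # lesion mask logits
```

In this sketch the prior map is treated as a plain extra input channel; any normalization of the residual before concatenation is left out for brevity.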
2.1 Estimating the tumor appearance
Estimating the prior information regarding the appearance of tumors can be achieved by first modeling the healthy anatomies and then detecting the tumors as anomalies. To model the distribution of complex healthy anatomies from whole-body PET-CT volumes, we employed a PCNN model as a robust inpainting network. This inpainting model can replace the pathological regions with the characteristics of nearby healthy tissues and generate plausible pathology-free images with realistic-looking and anatomically meaningful visual patterns. This is achieved in two steps: 1) forcing the model to learn the appearance of healthy anatomies, and 2) guiding the model to inpaint only the tumoral regions.
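The building block of the PCNN is the partial convolution layer of Liu et al. [4], in which the convolution is evaluated only over valid pixels, re-normalized by the local fraction of valid pixels, and the mask is progressively updated. The sketch below is a minimal, self-contained PyTorch version of such a layer for 2D slices; it follows the published formulation but is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal partial convolution layer in the spirit of Liu et al. [4]."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=True)
        # Fixed all-ones kernel used to count valid pixels under the sliding window.
        self.register_buffer("window", torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = float(kernel_size * kernel_size)
        self.padding = padding

    def forward(self, x, mask):
        # x: (B, C, H, W) image features; mask: (B, 1, H, W), 1 = valid, 0 = hole.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.window, padding=self.padding)
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Re-normalize by the fraction of valid pixels; zero out fully masked windows.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = torch.where(valid_count > 0, (out - bias) * scale + bias, torch.zeros_like(out))
        updated_mask = (valid_count > 0).float()
        return out, updated_mask
```

Stacking such layers in an encoder-decoder architecture, as in [4], yields the kind of inpainting network used here.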
To learn the attributes of healthy anatomies, healthy image slices from the PET-CT dataset were employed as the training set of the inpainting model. Specifically, more than 30000 healthy image slices were used for training, while the pathological slices were employed for testing. Considering the large diversity in the shape, size, and location of the tumoral regions, random irregular shapes were synthesized by combining regular geometrical shapes, including circles, squares, and ellipses, to corrupt the healthy images. The PCNN model is trained until it fills the random holes and replaces the gaps with meaningful anatomical and imagery patterns. The objective function of the PCNN model is constructed from several loss terms, including per-pixel loss, perceptual loss, style loss, and total variation loss. This multi-objective optimization improves the quality of the inpainted images, reconstructing high-quality images while preserving the anatomical details. The performance of the PCNN model was evaluated by quantifying the following metrics: peak signal-to-noise ratio, mean squared error, and structural similarity index.
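The mask-synthesis step can be pictured with the short NumPy sketch below, which overlays randomly placed circles, squares, and ellipses into a binary hole mask. The number of shapes and their size ranges are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def random_irregular_mask(height=256, width=256, n_shapes=8, rng=None):
    """Synthesize an irregular hole mask by overlaying random circles, squares,
    and ellipses (1 = pixel to be inpainted, 0 = valid pixel)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((height, width), dtype=np.float32)
    yy, xx = np.mgrid[0:height, 0:width]
    for _ in range(n_shapes):
        cy, cx = rng.integers(0, height), rng.integers(0, width)
        kind = rng.choice(["circle", "square", "ellipse"])
        if kind == "circle":
            r = rng.integers(5, height // 8)
            mask[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0
        elif kind == "square":
            s = rng.integers(5, height // 8)
            mask[max(0, cy - s):cy + s, max(0, cx - s):cx + s] = 1.0
        else:  # ellipse with random semi-axes
            a, b = rng.integers(5, height // 8), rng.integers(5, width // 8)
            mask[((yy - cy) / a) ** 2 + ((xx - cx) / b) ** 2 <= 1.0] = 1.0
    return mask

# Usage: corrupt a healthy slice before feeding it to the inpainting network.
# corrupted = healthy_slice * (1.0 - random_irregular_mask(*healthy_slice.shape))
```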
During the training step, the model learns to fill the random holes with the attributes of the nearby healthy tissues. This process forces the model to learn the distribution of the healthy anatomies. Therefore, in the test phase, the learned model can be used to replace the tumoral regions with the visual characteristics of healthy tissues. However, such a tumor removal step essentially requires tumoral masks. While in the NAA model a second autoencoder was employed to remove the tumors automatically, in this study we utilized the learned PCNN model to directly inpaint the tumoral regions by exploiting the hyperintensity patterns of PET images. Specifically, tumoral regions in PET volumes often appear with higher FDG uptake with respect to nearby healthy tissues.
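How the hyperintensity patterns are turned into inpainting masks is not detailed in the text above; purely as an illustration, the following sketch derives a candidate tumoral mask by thresholding the PET SUV map. The threshold value and minimum component size are hypothetical choices, not values from the paper.

```python
import numpy as np

def candidate_tumor_mask(suv_volume, suv_threshold=2.5, min_voxels=10):
    """Derive a candidate tumoral mask from PET hyperintensity by simple SUV
    thresholding. The threshold of 2.5 and the minimum component size are
    illustrative assumptions, not values reported in the paper."""
    mask = suv_volume >= suv_threshold
    # Optionally discard tiny connected components as likely noise.
    try:
        from scipy import ndimage
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep = np.isin(labels, [i + 1 for i, s in enumerate(sizes) if s >= min_voxels])
        mask = keep
    except ImportError:
        pass  # fall back to the raw threshold mask if SciPy is unavailable
    return mask.astype(np.float32)

# The resulting mask marks hyperintense regions to be inpainted by the trained
# PCNN, yielding a pseudo-healthy volume for the residual-based prior map.
```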