Adversarially Robust Prototypical Few-shot
Segmentation with Neural-ODEs
Prashant Pandey*1[0000-0002-6594-9685], Aleti Vardhan*2, Mustafa Chasmai1,
Tanuj Sur3, and Brejesh Lall1
1Indian Institute of Technology Delhi, India
2Manipal Institute of Technology, India
3Chennai Mathematical Institute, India
getprashant57@gmail.com
* Equal contribution
Abstract. Few-shot Learning (FSL) methods are being adopted in settings where data is not abundantly available. This is especially the case in medical domains, where annotations are expensive to obtain. Deep Neural Networks have been shown to be vulnerable to adversarial attacks, and this vulnerability is even more severe for FSL due to the lack of a large number of training examples. In this paper, we provide a framework to make few-shot segmentation models adversarially robust in the medical domain, where such attacks can severely impact the decisions made by clinicians who use them. We propose a novel robust few-shot segmentation framework, Prototypical Neural Ordinary Differential Equation (PNODE), that provides defense against gradient-based adversarial attacks. We show that our framework is more robust than traditional adversarial defense mechanisms such as adversarial training, which involves increased training time and provides robustness only to limited types of attacks, depending on the adversarial examples seen during training. Our proposed framework generalises well to common adversarial attacks like FGSM, PGD and SMIA while keeping the number of model parameters comparable to existing few-shot segmentation models. We show the effectiveness of our proposed approach on three publicly available multi-organ segmentation datasets, in both in-domain and cross-domain settings, by attacking the support and query sets without the need for ad-hoc adversarial training.
Keywords: Few-shot Segmentation · Neural-ODE · Adversarial Robustness.
1 Introduction
Modern-day safety-critical medical systems are vulnerable to different kinds of attacks that can endanger lives. With the penetration of AI, Machine Learning and Deep Neural models into healthcare and medical systems, it is imperative
to make such models robust against different kinds of attacks. By design, these models are data-hungry and need a significant amount of labelled data to achieve good performance and generalizability. Past studies have shown that it is not always feasible to annotate medical data, especially for segmentation problems, due to the substantial time and specialized skills required. The lack of well-annotated data makes these models vulnerable to different kinds of attacks, such as white-box and black-box adversarial attacks [2,5,13] on Deep Neural models. ML practitioners employ FSL [7,1] to learn patterns from well-annotated base classes and then transfer this knowledge to scarcely annotated novel classes. This knowledge transfer is severely impacted by adversarial attacks, in which support and query samples from novel classes are injected with adversarial noise [29].
Commonly used adversarial training mechanisms [2,13,17] require adversarially perturbed examples to be shown to the model during training. [34] introduced the standard adversarial training (SAT) procedure for semantic segmentation. These methods do not guarantee defense when the type of attack differs from the adversarially perturbed examples seen during training [18,30], and it is impractical to expose the model to every kind of adversarial example during training itself. Moreover, a common method that handles attacks on both support and query examples of novel classes is non-existent. To the best of our knowledge, adversarial attacks on few-shot segmentation (FSS) with Deep Neural models, and their defense mechanisms, have not yet been explored, and the need for such robust models is pressing. To this end, we propose Prototypical Neural Ordinary Differential Equation (PNODE), a novel prototypical few-shot segmentation framework based on Neural-ODEs [14] that provides defense against different kinds of adversarial attacks in different settings. Because the integral curves of Neural-ODEs are non-intersecting, adversarial perturbations in the input lead to small changes in the output, as opposed to existing FSS models whose output under perturbation is unpredictable. In this paper, we make the following contributions:
- We extend SAT to the FSS task to handle attacks on both the support and query sets.
- We propose a novel adversarially robust FSS framework, PNODE, that can handle different kinds of adversarial attacks like FGSM [2], PGD [13] and SMIA [33], differing in intensity and design, without an expensive adversarial training procedure.
- We show the effectiveness of our proposed approach on publicly available multi-organ segmentation datasets, namely BCV [3], CT-ORG [25] and DECATHLON [23], in both in-domain and cross-domain settings on novel classes.
2 Related Works
Neural ODEs: Deep learning models such as ResNets [4] map an input x to an output y by composing a sequence of transformations applied to a hidden state. In a ResNet block, the computation of a hidden-layer representation can be expressed as $h(t+1) = h(t) + f_\theta(h(t), t)$, where $t \in \{0, \dots, T\}$ and $h : [0, T] \to \mathbb{R}^n$.
As the number of layers increases and smaller steps are taken, in the limit, the continuous dynamics of the hidden layers are parameterized by an ordinary differential equation (ODE) [14] specified by a neural network, $\frac{dh(t)}{dt} = f_\theta(h(t), t)$, where $f : \mathbb{R}^n \times [0, T] \to \mathbb{R}^n$ denotes the non-linear trainable layers parameterized by weights $\theta$, and $h$ represents the n-dimensional state of the Neural-ODE. These layers define the relation between the input $h(0)$ and the output $h(T)$ at time $T > 0$ by providing the solution to the ODE initial value problem at terminal time $T$. Neural-ODEs are thus the continuous equivalent of ResNets, whose hidden layers can be regarded as discrete-time difference equations.
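As a concrete illustration of this discrete-to-continuous correspondence, the following is a minimal PyTorch sketch (our own, not the paper's implementation) of a Neural-ODE block that integrates $\frac{dh(t)}{dt} = f_\theta(h(t), t)$ with a fixed-step Euler solver; the class names, layer choices and step count are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """f_theta(h, t): trainable dynamics of the hidden state."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, h, t):
        # t is unused here (autonomous dynamics); it could be appended
        # as an extra channel to make f explicitly time-dependent.
        return self.net(h)


class NeuralODEBlock(nn.Module):
    """Integrates dh/dt = f_theta(h, t) from t = 0 to t = T with fixed-step Euler."""

    def __init__(self, dim, T=1.0, steps=10):
        super().__init__()
        self.func = ODEFunc(dim)
        self.T = T
        self.steps = steps

    def forward(self, h):
        dt = self.T / self.steps
        t = 0.0
        for _ in range(self.steps):
            # Euler update: h(t + dt) = h(t) + dt * f_theta(h(t), t),
            # the continuous analogue of the ResNet residual update.
            h = h + dt * self.func(h, t)
            t += dt
        return h


# Usage: a drop-in continuous-depth replacement for a residual block.
block = NeuralODEBlock(dim=64)
out = block(torch.randn(2, 64, 32, 32))  # h(T) has the same shape as h(0)
```

Higher-order solvers (e.g. Runge-Kutta) or adaptive-step integrators can be substituted for the Euler loop without changing the block's interface.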
Recent studies [27,28,31] have applied Neural-ODEs to defend against adversarial attacks. [27] proposes a time-invariant steady Neural-ODE that is more stable than conventional convolutional neural networks (CNNs) in the classification setting.
Few-shot Learning: FSL methods seek good generalization and learn transferable knowledge across different tasks with limited data [1,20,21]. Few-shot segmentation (FSS) [24,19,26] aims to perform pixel-level classification for novel classes in a query image when trained on only a few labelled support images. The commonly adopted approach for FSS is based on prototypical networks [6,19,32], which employ prototypes to represent typical information for the foreground objects present in the support images. In addition to the prototype-based setting, [24] incorporates 'squeeze & excite' blocks that avoid the need for pre-trained models in medical image segmentation. [26] uses a relation network [12] and introduces the FSS-1000 dataset, which is significantly smaller than contemporary large-scale datasets for FSS.
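For intuition about the prototype-based pipeline mentioned above, here is a hedged sketch (our own simplification, not the code of any cited method) of masked average pooling over support features to obtain a foreground prototype, followed by cosine-similarity scoring of query features:

```python
import torch
import torch.nn.functional as F


def masked_average_prototype(support_feat, support_mask):
    """support_feat: (S, C, H', W') backbone features; support_mask: (S, 1, H, W) binary foreground masks."""
    # Downsample the mask to the feature resolution.
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:], mode="nearest")
    # Average foreground features over all support images -> a single (C,) prototype.
    return (support_feat * mask).sum(dim=(0, 2, 3)) / (mask.sum() + 1e-6)


def query_similarity_map(query_feat, prototype):
    """query_feat: (Q, C, H', W'). Returns per-pixel cosine similarity to the prototype, shape (Q, H', W')."""
    prototype = prototype.view(1, -1, 1, 1)
    return F.cosine_similarity(query_feat, prototype, dim=1)


# Toy usage with random tensors standing in for backbone features.
s_feat = torch.randn(1, 64, 32, 32)
s_mask = torch.randint(0, 2, (1, 1, 256, 256))
q_feat = torch.randn(1, 64, 32, 32)
similarity = query_similarity_map(q_feat, masked_average_prototype(s_feat, s_mask))
```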
Adversarial robustness: Adversarial attacks for natural image classification have been extensively explored. FGSM [2] and PGD [13] generate adversarial examples based on the gradients of the CNN. Besides image classification, several attack methods have also been proposed for semantic segmentation [9,10,33,22]. [10] introduced Dense Adversary Generation (DAG), which optimizes a loss function over a set of pixels to generate adversarial perturbations. [15] studied the effects of adversarial attacks on brain segmentation and skin lesion classification. Recently, [33] proposed an adversarial attack (SMIA) for images in the medical domain that employs a loss stabilization term to exhaustively search the perturbation space. While adversarial attacks expose the vulnerability of deep neural networks, adversarial training [13,2,8] is effective in enhancing the target model by training it with adversarial samples. However, none of the existing methods have explored the SAT procedure for few-shot semantic segmentation.
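As a reference point for the gradient-based attacks discussed above, FGSM perturbs the input one step along the sign of the loss gradient, $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathcal{L}(g(x), y))$, while PGD applies this step iteratively with projection onto an $\epsilon$-ball. A minimal sketch for a segmentation model (the helper name and $\epsilon$ value are illustrative, not taken from the paper) is:

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, image, target_mask, epsilon=0.02):
    """Single-step FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                        # (B, num_classes, H, W) segmentation logits
    loss = F.cross_entropy(logits, target_mask)  # target_mask: (B, H, W) integer labels
    loss.backward()
    adv = image + epsilon * image.grad.sign()    # take one step up the loss surface
    return adv.clamp(0, 1).detach()              # keep the result a valid image
```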
3 Proposed Method
The objective is to build an FSS model robust to various gradient-based attacks on support and query images. Our methodology focuses on two aspects. First, we extend SAT as a defense mechanism for FSS. Second, we propose our framework, PNODE, which alleviates the limitations faced by SAT.
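To make the first aspect concrete, the following is a hedged sketch of one episodic adversarial-training step in which both the support and the query images are perturbed. It is our own simplification, assuming a prototypical model called as model(support_images, support_masks, query_images) and the fgsm_attack helper sketched earlier; the equal loss weighting is an illustrative choice, not the paper's recipe.

```python
import torch.nn.functional as F


def sat_episode_step(model, optimizer, sup_img, sup_mask, qry_img, qry_mask, epsilon=0.02):
    """One episodic adversarial-training step that perturbs both support and query images."""
    # Craft adversarial support and query sets against the current model;
    # in both cases the loss driving the attack is measured on the query prediction.
    adv_sup = fgsm_attack(lambda x: model(x, sup_mask, qry_img), sup_img, qry_mask, epsilon)
    adv_qry = fgsm_attack(lambda x: model(sup_img, sup_mask, x), qry_img, qry_mask, epsilon)

    # Train on the clean episode plus the two adversarial variants
    # (equal weighting here is an illustrative choice).
    optimizer.zero_grad()
    loss = (
        F.cross_entropy(model(sup_img, sup_mask, qry_img), qry_mask)
        + F.cross_entropy(model(adv_sup, sup_mask, qry_img), qry_mask)
        + F.cross_entropy(model(sup_img, sup_mask, adv_qry), qry_mask)
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the adversarial support and query sets would be regenerated at every iteration so that the attacks track the current model parameters.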