Dynamic Stochastic Ensemble with Adversarial
Robust Lottery Ticket Subnetworks
Qi Peng,1Wenlin Liu,2Ruoxi Qin,1Libin Hou,1Bin Yan,1Linyuan Wang1
1PLA Strategy Support Force Information Engineering University
2University of Science and Technology of China (USTC)
pqmailkwiki@163.com, wanglinyuanwly@163.com
Abstract
Adversarial attacks are considered the intrinsic vulnerability of CNNs. Defense
strategies designed for attacks have been stuck in the adversarial attack-defense
arms race, reflecting the imbalance between attack and defense. Dynamic Defense
Framework (DDF) recently changed the passive safety status quo based on the
stochastic ensemble model. The diversity of subnetworks, an essential concern in
the DDF, can be effectively evaluated by the adversarial transferability between
different networks. Inspired by the poor adversarial transferability between subnetworks of scratch tickets with various remaining ratios, we propose a method
to realize the dynamic stochastic ensemble defense strategy. We discover the
adversarial transferable diversity between robust lottery ticket subnetworks drawn
from different basic structures and sparsity. The experimental results suggest that our method achieves better robust and clean recognition accuracy through adversarial transferable diversity, which decreases the reliability of attacks.
1 Introduction
Deep neural networks (DNNs) currently define state-of-the-art performance in standard image
classification tasks. However, Szegedy et al. showed that advanced classifiers can be fooled by imperceptible perturbations known as adversarial samples[1]. This raises concern about the intrinsic vulnerability of DNNs[2, 3].
Defenders try hard to gain the initiative in the adversarial arms race and to resist the rapid development of adversarial attacks. Researchers have proposed many empirical and certified defense methods to obtain robust networks. Certified defense methods are supported by rigorous theoretical security guarantees and can steadily expand the robustness radius; however, transferring them to large datasets is impractical due to the high computational cost[4]. Adversarial training is currently the most flexible and effective empirical defense method, enhancing the training set with dynamically generated adversarial samples[5]. Nevertheless, previous research has verified the widespread transferability of adversarial examples[6, 7, 8], and adversarial training depends on specific attack algorithms for its augmented data[9], so its robustness implicitly relies on transferability to unseen attacks, which leaves the defender struggling and passive in the arms race. On the contrary, a rational ensemble strategy is an effective defense method in practice[10, 11]. A recent study presents the dynamic defense framework (DDF) based on the stochastic ensemble[12]. The DDF changes its ensemble states based on variable model attributes such as the architecture and smoothing parameters, and it expects heterogeneous candidate models to ensure diverse ensemble states.
We propose the dynamic stochastic ensemble with adversarially robust lottery ticket subnetworks. Based on the Lottery Ticket Hypothesis[13], Fu et al. discovered subnetworks with inborn robustness that match or surpass adversarially trained networks of the same structure without any model training[14]. Inspired by Fu et al., our method obtains subnetworks with different network structures and remaining ratios to promote adversarial transferable diversity for the DDF. By weakening the transferability between ensemble states, we improve the initiative of the DDF against the adversary.
2 Method
In the framework of dynamic defense, we present the dynamic stochastic ensemble with adversarially robust lottery ticket subnetworks. [14] proved the poor adversarial transferability between scratch tickets under a single structure. Drawing inspiration from prior works, we further explore the adversarial transferable diversity arising from different fundamental structures and remaining ratios.
2.1 The Dynamic Defense Framework and Adversarial Transferable Diversity
The DDF is a randomized defense strategy to protect ensemble gradient information, and its essential requirements are randomness and diversity to promote the ensemble's adversarial robustness[12]. It presents a model ensemble defense method with a randomized network parameter distribution, which makes the defender's behavior unknowable to the attacker. The output of the dynamic stochastic ensemble model $f_{ens}$ containing $I$ models is defined as follows:
$$f_{ens}(x,\theta)=\sum_{i=1}^{I} f_i(x,\theta) \qquad (1)$$
The randomness is achieved by switching the ensemble states through the ensemble variable $\theta$. The DDF demands the construction of diversified ensemble states from a heterogeneous model library. Relevant studies highlight that diverse network structures play a crucial role in ensemble defense[15]. In our solution, we evaluate the heterogeneity and diversity between ensemble subnetworks by the poor adversarial transferability of attacks between them.
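To make the ensemble mechanism concrete, the following is a minimal PyTorch-style sketch of Eq. 1. It is only illustrative: the function name, the way models are drawn from the library, and the final argmax decision rule are our assumptions rather than details given in the paper.

```python
import random
import torch

def dynamic_ensemble_predict(model_library, x, n_models):
    """Hedged sketch of Eq. 1: sum the outputs of a randomly drawn subset
    of candidate subnetworks, forming one stochastic ensemble state."""
    # Re-sample the ensemble state for each query so the attacker cannot
    # rely on a fixed gradient path through a static model.
    selected = random.sample(model_library, n_models)
    logits = 0.0
    with torch.no_grad():
        for f_i in selected:
            f_i.eval()
            logits = logits + f_i(x)   # accumulate per-model outputs f_i(x, theta)
    return logits.argmax(dim=1)        # ensemble decision for the batch
```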
2.2 Adversarial Robust Lottery Subnetwork
To verify that multi-sparsity adversarially robust lottery subnetworks can achieve better adversarial transferable diversity under different network structures, we picked four representative network structures, ResNet18, ResNet34, WideResNet32, and WideResNet38[16, 17], as the basic architectures of our experiments and drew sparse lottery tickets from the original dense networks. Following [14], we applied adversarial training to endow our subnetworks with robustness during pruning. This can be expressed as the min-max problem in Eq. 2.
$$\arg\min_{\lambda}\sum_{i}\max_{\|\delta\|\le\varepsilon} l\big(f(\hat{\lambda}\omega,\ x_i+\delta),\ y_i\big)\quad \text{s.t.}\ \|\hat{\lambda}\|_0\le k \qquad (2)$$
where $l$ denotes the loss function, $f$ is a randomly initialized network with random weights, and $\delta$ is the perturbation bounded by the maximum magnitude $\varepsilon$. To satisfy the sparsity of the subnetworks, we set a learnable weight $\lambda$ and a binary mask $\hat{\lambda}\in\{0,1\}$ whose dimensions match those of the weights[18, 19]. $\hat{\lambda}$ is meant to activate a small number of primary weights $\omega$. With the primary network parameters weighted by $\lambda\in(0,1)$, $f$ can be effectively trained against small perturbations added to the input $x_i$.
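The PyTorch-style sketch below illustrates the kind of score-based masking and inner maximization we read into Eq. 2. It is a simplification under our own assumptions (a single masked linear layer, a straight-through estimator for the binary mask, and a basic PGD inner loop), not the exact procedure of [14].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GetMask(torch.autograd.Function):
    """Binarize learnable scores: keep the top fraction (straight-through estimator)."""
    @staticmethod
    def forward(ctx, scores, remaining_ratio):
        k = int(remaining_ratio * scores.numel())
        mask = torch.zeros_like(scores)
        idx = torch.topk(scores.flatten().abs(), k).indices
        mask.view(-1)[idx] = 1.0                      # binary mask, plays the role of lambda-hat
        return mask
    @staticmethod
    def backward(ctx, grad):
        return grad, None                             # pass gradients straight through to the scores

class MaskedLinear(nn.Linear):
    """Frozen random weights omega; only the score tensor (lambda) is learned."""
    def __init__(self, in_f, out_f, remaining_ratio=0.2):
        super().__init__(in_f, out_f, bias=False)
        self.weight.requires_grad_(False)             # keep the inborn random weights fixed
        self.scores = nn.Parameter(torch.rand_like(self.weight))
        self.remaining_ratio = remaining_ratio
    def forward(self, x):
        mask = GetMask.apply(self.scores, self.remaining_ratio)
        return F.linear(x, mask * self.weight)        # effective weights: lambda-hat * omega

def pgd_perturb(model, x, y, eps=8/255, steps=7, alpha=2/255):
    """Inner maximization of Eq. 2: find delta with ||delta||_inf <= eps."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

# Outer minimization of Eq. 2 (sketch): update only the scores on adversarial inputs.
# opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
# delta = pgd_perturb(model, x, y)
# loss = F.cross_entropy(model(x + delta), y); loss.backward(); opt.step()
```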
2.3 Dynamical Ensemble for The Lottery Subnetworks
Through our method, we obtained forty subnetworks with different basic structures and sparsity. Based on the robust lottery subnetwork library, we define the randomized ensemble attribute parameter $\theta=\theta(\alpha, n, s)$, which determines the ensemble states. It can be achieved in the following steps (a code sketch follows the list):
(A) Construct a robust lottery subnetwork library with adversarial transferable diversity, including forty sparse subnetworks: ten for each of ResNet18, ResNet34, WideResNet32, and WideResNet38.
(B) Set the ranges for $\alpha$ and $s$. We bring the four basic structures into the selection rather than the entire library, which increases the possibility of covering more structures. This is realized by $\alpha =$
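The extracted text is cut off at this point, so the following Python sketch is only our guess at how an ensemble state $\theta=\theta(\alpha, n, s)$ could be sampled from the forty-subnetwork library: we assume $\alpha$ selects which basic structures participate, $n$ how many subnetworks are ensembled, and $s$ the remaining ratio of each selected subnetwork. The library layout and sampling ranges are hypothetical.

```python
import random

# Hypothetical library layout: structure name -> {remaining_ratio: subnetwork}.
# Step (A) gives ten sparsity levels per structure, forty subnetworks in total.
STRUCTURES = ["ResNet18", "ResNet34", "WideResNet32", "WideResNet38"]

def sample_ensemble_state(library, max_models=4):
    """Draw one random ensemble state theta = (alpha, n, s)."""
    n = random.randint(2, max_models)               # n: number of ensembled subnetworks
    alpha = random.sample(STRUCTURES, n)             # alpha: which basic structures join
    state = []
    for structure in alpha:
        s = random.choice(list(library[structure]))  # s: remaining ratio for this pick
        state.append(library[structure][s])
    return state   # subnetworks forming f_ens for the current query
```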