CERTIFIED ROBUSTNESS OF QUANTUM CLASSIFIERS AGAINST ADVERSARIAL EXAMPLES THROUGH QUANTUM NOISE

arXiv:2211.00887v2 [quant-ph] 28 Apr 2023
Jhih-Cing Huang^1, Yu-Lin Tsai^2, Chao-Han Huck Yang^3, Cheng-Fang Su^2, Chia-Mu Yu^2, Pin-Yu Chen^4, Sy-Yen Kuo^1
^1 National Taiwan University, Taiwan   ^2 National Yang Ming Chiao Tung University, Taiwan
^3 Georgia Institute of Technology, GA, USA   ^4 IBM Research, NY, USA
ABSTRACT
Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which quantum classifiers are deceived by imperceptible noise, leading to misclassification. In this paper, we propose the first theoretical study demonstrating that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks. We link this approach to the definition of differential privacy and show that a quantum classifier trained with the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, supported by experimental results simulated with noise from IBM's 7-qubit device.
1. INTRODUCTION
The joint study of quantum computing and machine learning opens a new research area named quantum machine learning [1]. For example, quantum classifiers [2, 3, 4, 5, 6] solve classification problems similar to the classical ones. Furthermore, the data availability of quantum classifiers is wider, since quantum data, which is generated from natural or artificial quantum systems, can also be included. In particular, the quantum neural network (QNN) is a pure quantum model [7, 8, 9, 10, 11] that contains trainable parameters; thus, a QNN is also identified as a parameterized quantum circuit. Unlike classical machine learning, QNNs predict the labels of data by measuring the ancilla qubits that are carried along with the training data. Based on the training data, QNNs utilize optimizers similar to the classical ones, with specific techniques [12, 13, 14] to calculate the gradient. However, recent findings also suggest that QNN-based models are sensitive to small gradient-based perturbations that lead to malicious misclassification. To deal with adversarial examples on QNNs, Weber et al. [15] derive a tight condition for robustness against adversarial perturbations of QNNs. Furthermore, Du et al. [16] show that QNNs can resist adversarial perturbations via depolarization noise.
Contribution
In this paper, inspired by [16], we present a formal theoretical analysis of the robustness of QNNs, where the robustness can be improved via added quantum random rotation noise. Applying this quantum rotation noise to QNNs enables us to derive a certified robustness bound.
1.1. Related Work
Randomized Smoothing
Classical randomized smoothing [17] protects classical classifiers by adding noise, such as Gaussian noise, and sampling multiple times; the label with the highest probability is assigned to the data point, i.e., $g(x) = \arg\max_{c \in \kappa} P(f(x + \epsilon) = c)$, where $\kappa$ is the set of all possible classes and $\epsilon \sim N(0, \sigma^2 I)$.
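As a point of reference, this classical procedure can be sketched in a few lines of NumPy. The `base_classifier` below is a hypothetical stand-in for $f$ (not from the paper); the Gaussian sampling and majority vote follow the definition above.

```python
import numpy as np

def smoothed_classify(base_classifier, x, sigma=0.5, n_samples=1000, rng=None):
    """Monte Carlo estimate of g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    labels = np.array([base_classifier(x + e) for e in noise])
    # Majority vote over the sampled predictions.
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

# Toy base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0)
print(smoothed_classify(f, np.array([2.0, 0.0])))  # 1 (far from the decision boundary)
```

In practice the number of samples controls the confidence of the vote; certification additionally lower-bounds the top-class probability, which this sketch omits.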
Differential Privacy
Differential privacy [18] is one of the de facto standards in the realm of data privacy. Furthermore, differential privacy quantifies the amount of privacy protection. Formally, we define the notion of $\epsilon$-differential privacy in the following. Let $\epsilon$ be a positive real number and $A$ be a randomized algorithm that takes a dataset as input. Let $\mathrm{im}\,A$ denote the image of $A$. The algorithm $A$ is said to provide $\epsilon$-differential privacy if, for all datasets $D_1$ and $D_2$ that differ on a single element (i.e., the data of one person), and all subsets $S$ of $\mathrm{im}\,A$, we have $\Pr[A(D_1) \in S] \le e^{\epsilon} \cdot \Pr[A(D_2) \in S]$, where the probability is taken over the randomness used by the algorithm.
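For intuition, the classical Laplace mechanism (not part of this paper's construction) achieves $\epsilon$-differential privacy for a query of sensitivity $\Delta$ by adding Laplace$(\Delta/\epsilon)$ noise; a minimal sketch, with an illustrative counting query:

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity=1.0, epsilon=0.5, rng=None):
    """Release query_value with Laplace(sensitivity / epsilon) noise, giving epsilon-DP."""
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon  # larger scale <-> stronger privacy (smaller epsilon)
    return query_value + rng.laplace(0.0, scale)

# A counting query has sensitivity 1: one person changes the count by at most 1.
dataset = [23, 45, 31, 52, 38]
true_count = sum(1 for age in dataset if age > 30)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The multiplicative $e^{\epsilon}$ bound in the definition follows from the ratio of the Laplace densities at outputs shifted by at most the sensitivity.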
Certified Robustness
For a certifiably robust [19, 20] classical classifier with robustness bound $d$, the predictions of input data points $x$ and $x'$ are guaranteed to be the same when $x$ and $x'$ are neighboring data points whose $\ell_p$ distance is less than the threshold $d$. The definition of certified robustness of quantum classifiers with robustness bound $\tau_D$ is similar to the classical one. Namely, for two quantum states $\sigma$ and $\rho$, if $\tau(\sigma, \rho) < \tau_D$, where the distance is the trace distance (i.e., $\tau(\sigma, \rho) := \frac{1}{2}\mathrm{Tr}\,|\sigma - \rho|$), the majority output labels of $\sigma$ and $\rho$ are guaranteed to be the same.
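The trace distance above is straightforward to compute numerically; a small NumPy sketch for two density matrices, using the eigenvalues of the Hermitian difference:

```python
import numpy as np

def trace_distance(sigma, rho):
    """tau(sigma, rho) = (1/2) Tr |sigma - rho|, via eigenvalues of the Hermitian difference."""
    eigvals = np.linalg.eigvalsh(sigma - rho)
    return 0.5 * np.sum(np.abs(eigvals))

# Orthogonal pure states |0><0| and |1><1| are perfectly distinguishable: distance 1.
sigma = np.array([[1.0, 0.0], [0.0, 0.0]])
rho = np.array([[0.0, 0.0], [0.0, 1.0]])
print(trace_distance(sigma, rho))  # 1.0
```

For pure states the trace distance reduces to $\sqrt{1 - |\langle\psi|\phi\rangle|^2}$, so, e.g., $|0\rangle$ and $|+\rangle$ are at distance $1/\sqrt{2}$.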
QNN Robustness
Recently, some research efforts have been devoted to developing robustness for QNNs. For example, Weber et al. [21] formulate the adversarial robustness of QNNs as semidefinite programs and consider higher statistical moments of the observable and generalized bounds. Weber et al. [15] establish a link between binary quantum hypothesis testing and provably robust QNNs, resulting in a robustness condition for the amount of noise a classifier can tolerate. Du et al. [16] find that the depolarization noise in QNNs helps derive a robustness bound, where the robustness improves with increasing noise.
2. BACKGROUND KNOWLEDGE
Quantum Classifier
Parameterized quantum circuits are quantum frameworks that depend on trainable parameters and can therefore be optimized [22]. The variational quantum classifier algorithm falls under this framework and is the predominant basis of quantum classifiers. Several optimization methods have been developed and different quantum classifiers have been proposed. In our work, we use the optimization method called the parameter-shift rule [12]. That is, the output of a variational quantum circuit, denoted by $f(\theta)$, is parameterized by $\theta = \theta_1, \theta_2, \ldots$. To optimize $\theta$, we need the partial derivatives of $f(\theta)$, which can be expressed as a linear combination of other quantum functions, typically derived from the same circuit with a shift of $\theta$. That is, the same variational quantum circuit can be used to compute the partial derivatives of $f(\theta)$.
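For a gate $U(\theta) = e^{-i\theta P/2}$ generated by a Pauli operator $P$, the parameter-shift rule reads $\partial_\theta f(\theta) = \frac{1}{2}[f(\theta + \pi/2) - f(\theta - \pi/2)]$. The sketch below checks this on the single-qubit example $f(\theta) = \langle 0|R_x^\dagger(\theta)\, Z\, R_x(\theta)|0\rangle = \cos\theta$ using plain NumPy (no quantum SDK assumed).

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Single-qubit Pauli-X rotation R_x(theta) = exp(-i theta X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def f(theta):
    """Circuit output: <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)."""
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

def parameter_shift_grad(theta):
    """Exact gradient from two evaluations of the SAME circuit at shifted angles."""
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta))  # equals -sin(0.7), the analytic derivative
```

Unlike finite differences, the two-point shift is exact for such gates, which is why the same circuit suffices for gradient evaluation.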
Besides, we need to encode our classical data, and in this aspect we adopt the amplitude encoding method [23, 24]. To encode data efficiently, amplitude encoding transforms classical data into a linear combination of independent quantum states, with the magnitudes of the features as the weights, which can be expressed as $S_x|0\rangle = \frac{1}{|x|}\sum_{i=1}^{2^n} x_i |i\rangle$, where each $x_i$ is a feature (component) of the data point $x$, and $|i\rangle$ is a basis state of the $n$-qubit space. In our work, we assume a $K$-class quantum classifier whose output is the predicted label of the input state.
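Numerically, amplitude encoding amounts to L2-normalizing the feature vector, padded with zeros to length $2^n$; a minimal sketch:

```python
import numpy as np

def amplitude_encode(x, n_qubits):
    """Map a classical vector x to the 2^n amplitudes of an n-qubit state."""
    dim = 2 ** n_qubits
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)  # amplitudes must have unit L2 norm

state = amplitude_encode([3.0, 4.0], n_qubits=1)
print(state)              # [0.6 0.8]
print(np.sum(state**2))   # 1.0 -- a valid quantum state
```

The efficiency claim in the text refers to the exponential compression: $2^n$ features fit into $n$ qubits, although preparing $S_x$ on hardware is itself nontrivial.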
Let $\{\Pi_k\}$ be a positive operator-valued measure (POVM) and $\mathcal{E}$ be the quantum operation of the quantum classifier. Define $y_k(\sigma) \equiv \mathrm{Tr}(\Pi_k \mathcal{E}(\sigma \otimes |a\rangle\langle a|))$, which denotes the probability with which the input state $\sigma$ is assigned to the label $k$, $k \in \{0, 1, 2, \ldots, K-1\}$, and $\tilde{y}_k(\sigma) = \mathrm{Tr}(\Pi_k \mathcal{E}(R(\sigma) \otimes |a\rangle\langle a|))$, which denotes the probability with which the input state $\sigma$ is assigned to the label $k$ under noise, where $R$ is the noise operator. Since it is impossible to derive the actual $y_k(\sigma)$ and $\tilde{y}_k(\sigma)$, we sample $N$ times to estimate $y_k$ and $\tilde{y}_k$ with $y_k^{(N)}(\sigma)$ and $\tilde{y}_k^{(N)}(\sigma)$, respectively. In our work, we assume $K = 2$ (binary classification) for convenience, but similar reasoning also applies in the multiclass scenario.
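The finite-sample estimate $y_k^{(N)}$ is just the empirical frequency of each measurement outcome over $N$ shots; a toy simulation (the true probabilities here are illustrative, not from the paper):

```python
import numpy as np

def estimate_label_probs(true_probs, n_shots=10000, rng=None):
    """Estimate y_k by repeating the measurement N times and counting outcomes."""
    rng = rng or np.random.default_rng(0)
    outcomes = rng.choice(len(true_probs), size=n_shots, p=true_probs)
    counts = np.bincount(outcomes, minlength=len(true_probs))
    return counts / n_shots  # y_k^(N) -> y_k as N grows, with O(1/sqrt(N)) error

y_true = [0.7, 0.3]                  # binary case, K = 2
y_hat = estimate_label_probs(y_true) # close to [0.7, 0.3]
```

The $O(1/\sqrt{N})$ sampling error is what any certified bound on $y_k^{(N)}$ must ultimately absorb.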
Quantum Differential Privacy
Similar to classical $\epsilon$-differential privacy, we adopt the quantum version of $\epsilon$-differential privacy from [25]. Furthermore, we say a quantum classifier for a $K$-class classification problem satisfies $\epsilon$-differential privacy if the following holds. Let $\epsilon$ be a positive real number and $M$ be a quantum algorithm that takes a quantum state as input. The algorithm $M$ is said to provide quantum $\epsilon$-differential privacy if, for all input quantum states $\sigma$ and $\rho$ such that $\tau(\sigma, \rho) < \tau_D$, and for all $\Pi_i$, $i \in \{0, 1, 2, \ldots, K-1\}$, we have $\Pr[M(\sigma, \Pi_i)] \le \exp(\epsilon) \cdot \Pr[M(\rho, \Pi_i)]$, and therefore $e^{-\epsilon} \le \tilde{y}_k(\rho)/\tilde{y}_k(\sigma) \le e^{\epsilon}$.
3. PROPOSED METHOD
We begin with the idea of simulating randomized smoothing in quantum machine learning. We aim to add perturbations to qubits and consider random rotations on the Bloch sphere as a counterpart of randomized smoothing. Then, we apply rotation gates, as shown in Fig. 1, to each input qubit and set the rotation angles with random variables generated by classical computers.
Fig. 1: The rotation circuit with output density matrix $R(\sigma)$.
Our proposed method is summarized in Algorithm 1. Our method does not assume any details of the quantum classifier and is thus model-agnostic. It guarantees the accuracy of the original quantum classifier, and the corresponding robustness bound is applicable to all kinds of quantum classifiers. Further analysis of Algorithm 1 is given in the subsequent section.
Algorithm 1 Quantum model under quantum rotation noise
Input: $\sigma$, where $\sigma$ is the density matrix of $n$-dimensional data.
Output: $f(\theta, \sigma)$
1. For a chosen quantum classifier, add a Pauli-X rotation operator before each input qubit.
2. Generate $n$ random variables $\theta_1, \theta_2, \ldots, \theta_n$ subject to $0 < h_1 < \tan\theta_i < h_2$ for all $i \in \{1, 2, \ldots, n\}$.
3. Set the rotation angles of the additional Pauli-X rotation operators to $\theta_1, \theta_2, \ldots, \theta_n$.
4. Execute the quantum classifier $N$ times to get the score vector $f(\theta, \sigma)$.
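A classical simulation of steps 1–3 can be sketched as follows. The angle constraint $0 < h_1 < \tan\theta_i < h_2$ is implemented here by drawing $\tan\theta_i$ uniformly (an assumption; the paper does not fix the sampling distribution), the values of $h_1$ and $h_2$ are illustrative, and the classifier itself is left abstract.

```python
import numpy as np

def rx(theta):
    """Pauli-X rotation R_x(theta) = exp(-i theta X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rotation_noise(sigma, h1=0.1, h2=1.0, rng=None):
    """Apply independent random R_x rotations (Algorithm 1, steps 1-3) to an n-qubit density matrix."""
    rng = rng or np.random.default_rng(0)
    n = int(np.log2(sigma.shape[0]))
    # Step 2: draw angles with 0 < h1 < tan(theta_i) < h2.
    thetas = np.arctan(rng.uniform(h1, h2, size=n))
    # Step 3: one rotation per input qubit, combined by tensor product.
    U = np.array([[1.0 + 0j]])
    for theta in thetas:
        U = np.kron(U, rx(theta))
    return U @ sigma @ U.conj().T  # the noisy state R(sigma)

# Two-qubit input state |00><00|.
sigma = np.zeros((4, 4), dtype=complex)
sigma[0, 0] = 1.0
noisy = rotation_noise(sigma)
print(np.trace(noisy).real)  # 1.0 -- still a valid density matrix
```

Since the rotations are unitary, the noisy state remains a valid density matrix; step 4 then simply runs the chosen classifier $N$ times on $R(\sigma)$.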
4. THEORETICAL ANALYSIS
Our goal is to demonstrate that random rotation noises can
be used to protect quantum classifiers against adversarial per-
turbations. This can be divided into three main steps. We
first show the invariance of outcomes between noisy classifiers
and original ones. Then we demonstrate how random rotation
noises improve quantum differential privacy for the classi-
fiers. Ultimately, we can show the connection between the
differential privacy and the better robustness against general
adversaries of classifiers.