activates the backdoor in the backdoored drug screening system,
causing misclassification. Specifically, the backdoored drug
screening system classifies the backdoored molecular graph
(whose true label is a poisonous drug) as a benign drug,
yielding a wrong prediction of the drug property.
Several backdoor attacks [26], [27], [28], [29] have been
proposed to reveal the vulnerability of GNNs in the training
phase. According to the way the trigger is generated, existing
attacks are categorized as random attacks and gradient-based
generative attacks. In previous works [26], [27], [29],
a subgraph is randomly generated as the trigger, following a
certain network distribution (e.g., Erdős-Rényi, small world,
and preferential attachment). Such attacks are easy to implement,
but suffer from an unstable attack effect. Another method
[28] adopts a gradient-based generative strategy to obtain
the subgraph trigger, whereas it requires more time and
more knowledge (e.g., the target model's structure and parameters)
to optimize the trigger.
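The random trigger generation described above can be sketched in a few lines. This is a minimal illustration, not the cited attacks' implementation: the function names and the way the trigger is attached via a node-relabeling map are assumptions made here for clarity.

```python
import random

def erdos_renyi_trigger(n, p, seed=None):
    """Sample an Erdos-Renyi G(n, p) subgraph to serve as a backdoor trigger.

    Returns the trigger as an edge list over nodes 0..n-1. Real attacks
    additionally constrain n and the edge count to a perturbation budget.
    """
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def inject_trigger(graph_edges, trigger_edges, attach_map):
    """Attach the trigger to a graph by relabeling its nodes via attach_map.

    attach_map sends each trigger node either to an existing graph node
    (fusing the trigger into the graph) or to a fresh node id.
    """
    return list(graph_edges) + [(attach_map[u], attach_map[v])
                                for u, v in trigger_edges]
```

Small-world or preferential-attachment triggers would only change the sampling step; the instability noted above comes from the sampled structure varying from run to run.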
Since different subgraphs can be used as triggers, it is
important to understand how different subgraphs can affect
the backdoor attack’s impact on practical applications. Motifs,
which are recurrent and statistically significant subgraphs in
graphs, are particularly relevant to the function of the graphs.
They serve as the fundamental building blocks of graphs, and
contain rich structural information. Motifs have been extensively
studied in various domains, such as biochemistry [30],
neuroscience [31], and social networks [32], and have been
shown to play crucial roles in the function and behavior of
these systems. In the context of backdoor attacks on GNNs,
motifs can serve as a powerful tool to bridge the gap between
the impact of different subgraphs as triggers and the underlying
graph structures. By leveraging the intrinsic properties of motifs,
we can generate more effective and stable triggers that are more
closely related to the graph structure and function, and thereby
gain deeper insights into the vulnerability of GNNs to backdoor
attacks.
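As a concrete illustration of motif statistics, a toy census of the two connected three-node motifs (triangles and open wedges) over an undirected adjacency dict might look as follows. The motif families actually analyzed in this work are richer, so this is only a sketch of the counting idea.

```python
from itertools import combinations

def count_3node_motifs(adj):
    """Count triangles and open wedges in an undirected graph.

    adj maps each node to the set of its neighbors.
    """
    triangles = wedges = 0
    for u, v, w in combinations(sorted(adj), 3):
        # Number of edges among the three nodes decides the motif type.
        edges = (v in adj[u]) + (w in adj[u]) + (w in adj[v])
        if edges == 3:
            triangles += 1
        elif edges == 2:
            wedges += 1
    return {"triangle": triangles, "wedge": wedges}
```

Comparing such counts across a dataset is what reveals which subgraphs are frequent or rare, the statistic the trigger selection below relies on.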
To sum up, there are three challenges for backdoor attacks
against GNNs. (i) Trigger Structure Limitation. There are many
structures that satisfy the trigger perturbation limit, and it is
difficult to efficiently determine the appropriate trigger structure.
(ii) Attack Knowledge Limitation. Without the target model
feedback information, it is difficult for the attacker to achieve
a stable and effective attack. (iii) Injection Position Limitation.
The space of candidate trigger injection positions is huge,
and it is challenging to select a good injection position
efficiently.
To cope with the above challenges, we propose a novel
motif-based backdoor attack against GNNs, namely Motif-
Backdoor. Specifically, to tackle challenge (i), we analyze
the distribution of motifs in the training graphs and select an
appropriate motif as the trigger. This generates the trigger
from statistics, which is much faster than optimization-based
methods. For the knowledge limitation in challenge (ii), we
construct a reliable shadow model, which is based on the
structure of state-of-the-art (SOTA) models and training data
labeled with the confidence scores output by the target model.
To address challenge (iii), we leverage strategies of the graph
index (graph structure perspective) and dropping the target
node (model feedback perspective) to measure node importance,
which allows us to select an effective trigger injection position.
Empirically, our approach achieves SOTA results on four
real-world datasets and three popular GNNs compared
with five baselines. Additionally, we propose a possible defense
against Motif-Backdoor, and experiments show that it reduces
the attack success rate of Motif-Backdoor by only 4.17%
on average on several well-performing GNNs. Compared to the
existing methods, our motif-based backdoor attack method
has several advantages. Firstly, by leveraging the intrinsic
properties of motifs, we are able to generate more stable
and effective triggers with higher success rates. Secondly,
our method requires less knowledge of the target model’s
structure and parameters, making it more practical for real-
world scenarios. Finally, the use of motifs provides a new
perspective for exploring the vulnerability of GNNs, which
has not been studied in previous works.
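The two node-importance views mentioned above (a structural graph index and model feedback from dropping a node) can be sketched as follows. The degree index, the drop-node score, and the combination weight `alpha` are illustrative stand-ins, assumed here for the sketch rather than taken from the paper's exact formulation.

```python
def degree_scores(adj):
    """Structural importance: node degree from a {node: set(neighbors)} dict."""
    return {u: len(nbrs) for u, nbrs in adj.items()}

def drop_node_scores(adj, confidence):
    """Model-feedback importance: confidence drop when a node is removed.

    confidence is any callable mapping an adjacency dict to the shadow
    model's confidence on the target label.
    """
    base = confidence(adj)
    return {u: base - confidence({v: nbrs - {u}
                                  for v, nbrs in adj.items() if v != u})
            for u in adj}

def rank_injection_positions(adj, confidence, alpha=0.5):
    """Rank nodes by a weighted mix of both views (alpha is illustrative)."""
    deg = degree_scores(adj)
    drop = drop_node_scores(adj, confidence)
    combined = {u: alpha * deg[u] + (1 - alpha) * drop[u] for u in adj}
    return sorted(combined, key=combined.get, reverse=True)
```

In a black-box setting, `confidence` would be served by the shadow model, so no query to the target model is needed when scoring candidate positions.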
The main contributions of this paper are summarized as
follows:
• We reveal the impact of trigger structure and graph topology
on backdoor attack performance from the perspective
of motifs, and obtain some novel insights, e.g., using
subgraphs that appear less frequently in the graph as the
trigger achieves better attack performance. Furthermore,
we provide explanations for this insight.
• Inspired by motifs, we propose an effective attack
framework, namely Motif-Backdoor. It quickly selects the
trigger based on the distribution of motifs in the dataset.
Besides, a shadow model is constructed to transfer the
attack from the white-box to the black-box scenario. For the
trigger injection position, we propose graph-index and
target-node-dropping strategies to measure node importance.
• Extensive experiments on three popular GNNs over four
real-world datasets demonstrate that Motif-Backdoor
achieves SOTA performance, e.g., Motif-Backdoor
improves the attack success rate by 14.73% on average
compared with the baselines. Moreover, experiments show
that Motif-Backdoor remains effective against a possible
defense strategy.
The rest of the paper is organized as follows. Related
work is introduced in Section II. The problem definition and
threat model are described in Section III. The backdoor attack
from the motif perspective is analyzed in Section IV, while the
proposed method is detailed in Section V. Experimental results
and discussion are presented in Section VI. Finally, we conclude our work.
II. RELATED WORK
Our work focuses on backdoor attacks against GNNs from
the perspective of motifs. In this section, we briefly review
related work in two categories: backdoor attacks against GNNs
and motifs for GNNs.
A. Backdoor Attacks on GNNs
Based on the generation method of the trigger, the existing
backdoor attack methods can be divided into two categories: