into the original graph. Specifically, the proposed GANI can
be divided into the following two parts. In the feature
generation procedure, the statistical feature information of
the original nodes is utilized to make the new nodes similar
to normal nodes. In the link generation procedure, we
employ evolutionary algorithms to find the optimal links
that yield the best attack performance. Moreover, we utilize
the decrease of node homophily [18], [19] to reduce the
randomness of the optimal neighbor selection. Finally,
through extensive experiments on datasets with different
feature types, we verify the superiority of the proposed
attack strategy in both attack performance and
imperceptibility.
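The notion of node homophily used above is commonly defined as the fraction of a node's neighbors that share its class label. A minimal sketch (the toy graph, labels, and helper below are illustrative, not taken from this work):

```python
# Minimal sketch of node-level homophily: the fraction of a node's
# neighbors sharing its class label. Graph and labels are toy examples.

def node_homophily(neighbors, labels, v):
    """Return |{u in N(v) : y_u = y_v}| / |N(v)|."""
    nbrs = neighbors[v]
    if not nbrs:
        return 0.0
    same = sum(1 for u in nbrs if labels[u] == labels[v])
    return same / len(nbrs)

# Toy graph: node 0 connects to nodes 1, 2, 3.
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
labels = {0: "A", 1: "A", 2: "A", 3: "B"}

print(node_homophily(neighbors, labels, 0))  # 2 of node 0's 3 neighbors share label "A"
```

In GANI, the drop in this quantity caused by a candidate link is used to rank neighbor choices, which reduces the randomness of the neighbor selection.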
Our major contributions are as follows:
•We present new insights on promoting the imper-
ceptibility of NIA from both the topology and feature
domains, which were seldom discussed in previous
studies.
•We propose a powerful NIA method by combining
a statistical information-based feature generation
step with an evolutionary algorithm-based neighbor
selection step in the global poisoning attack scenario.
•We conduct extensive experiments on several
datasets with different types of features to verify the
effectiveness and imperceptibility of the proposed
method.
The remainder of this work is organized as follows.
Section 2 discusses related work on current attacks and
defenses for graph neural networks. Preliminaries on graph
neural networks, node injection attacks, and the definition
of homophily are given in Section 3. Then, we introduce
the attack budget settings and the methods for feature
generation and neighbor selection of injected nodes in
Section 4. In Section 5, we present the corresponding ex-
periments and discussions on both attack performance and
the imperceptibility of the perturbations in detail. Finally,
we conclude our work in Section 6.
2 RELATED WORK
In this work, we aim at global attacks, which seek to
reduce the overall effectiveness of the models rather than
influence specific target nodes. Moreover, we concentrate
on poisoning attacks on graphs, in which the attack strategy
is applied to models that are retrained after the attack,
rather than to fixed models with the same parameters before
and after the attack (i.e., evasion attacks). Compared with
evasion attacks, poisoning attacks are more realistic but also
more difficult due to the model retraining. In the following,
we briefly review current efforts on typical adversarial
attacks and node injection attacks, respectively, and then
briefly introduce current defense methods on graphs.
2.1 Typical Adversarial Attacks on GNNs
In most existing studies, attacks are launched by modifying
the links or features of the original nodes. Zügner et al. [13]
proposed the first graph adversarial attack method, Nettack,
to point out the security concern in current GNNs by
utilizing the classification margin of the nodes predicted in
different classes. Dai et al. [15] put forward a reinforcement
learning-based strategy, RL-S2V, to achieve the targeted
evasion attacks. In order to achieve global attacks, Zügner
and Günnemann [14] further proposed Metattack (META
for short in this work) by utilizing the meta gradient of
loss functions. However, all of the above strategies are
designed under the assumption that attackers can easily
modify the original graphs, which is unrealistic in practice.
Moreover, we also find that directly extending current
methods to the node injection scenario cannot achieve the
same performance, as will be shown in our experiments
later.
2.2 Node Injection Attacks on GNNs
To address the above concerns, some node injection attack
methods have been proposed recently. Sun et al. [20] intro-
duced a strong NIA strategy named NIPA which utilizes the
deep Q-learning method to achieve the global poisoning at-
tacks. Aiming at targeted poisoning attacks, Wang et al. [21]
proposed AFGSM by considering the gradient of loss func-
tion of targeted nodes. Zou et al. [22] introduced TDGIA,
a global evasion injection method combining topological
defective edge selection strategy and smooth feature genera-
tions. Moreover, Tao et al. [23] proposed G-NIA, which
models the attack process via a parametric model to
preserve the learned patterns so that they can be reused in
the inference phase. Recently, Chen et al. [24] proposed
HAO, which utilizes the homophily ratio of nodes as an
unnoticeability constraint to further improve the
unnoticeability of adversarial attacks.
However, the two concerns (i.e., feature inconsistency
and budget constraint) still have not been well addressed
in prior work. Specifically, NIPA generates adversarial
features by adding Gaussian noise to the average features
of all nodes. Since different datasets may have different
types of features (i.e., binary or continuous), and the feature
vector of each node is usually extremely sparse, the
Gaussian noise may dominate the features of the new
nodes, making the features of newly injected nodes very
different from those of the original nodes. Moreover,
AFGSM simply generates binary features for the new
nodes, while TDGIA only generates continuous features
and fails to limit the feature-attack budget. Among these
methods, only G-NIA solves the feature consistency
problem.
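The sparsity issue described above can be made concrete with a small numerical sketch (toy data and noise scale, not values from any cited paper): when the average of sparse binary feature vectors is perturbed with Gaussian noise, the noise dominates, and the resulting vector is dense and can even contain negative entries, unlike any original binary row.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse binary features: 100 nodes, 50 dimensions, roughly 5% ones.
X = (rng.random((100, 50)) < 0.05).astype(float)

# NIPA-style generation as described above: mean feature plus Gaussian noise
# (the 0.1 noise scale is an assumption for illustration).
mean_feat = X.mean(axis=0)                      # entries near 0.05 -- already tiny
noisy = mean_feat + rng.normal(0.0, 0.1, size=50)

# The Gaussian term swamps the signal: compare typical magnitudes,
# and count entries pushed below zero, which no binary feature can have.
print(np.abs(mean_feat).mean(), np.abs(noisy - mean_feat).mean())
print((noisy < 0).sum())
```

A statistics-based generator that samples from the empirical per-dimension feature distribution would instead keep the injected features inside the original range and sparsity pattern.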
At the same time, none of the above works tackles the
degree problem of the new nodes. Although HAO retains
the homophily ratio of the corresponding nodes during
injection from a local perspective, it may break other global
network properties or constraints, such as the degree
distribution and the range/type of node features, which can
be easily detected.
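To illustrate how such global violations can be detected, consider a defender who inspects the degree histogram before and after a suspected injection. The sketch below (toy heavy-tailed degrees and a crude, hypothetical anomaly score) shows how giving every injected node the same degree produces an obvious spike:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy original degrees: a heavy-tailed (power-law-like) sample,
# truncated to a realistic maximum degree.
orig_deg = rng.zipf(2.5, size=1000)
orig_deg = orig_deg[orig_deg <= 50]

# Naive injection: every new node gets the same degree, say 10.
injected_deg = np.full(50, 10)
after = np.concatenate([orig_deg, injected_deg])

def spike_score(degs, d):
    """Fraction of nodes with exactly degree d (a crude anomaly signal)."""
    return np.mean(degs == d)

# The injected mass at degree 10 stands out against the original tail.
print(spike_score(orig_deg, 10), spike_score(after, 10))
```

An injection strategy that samples new-node degrees from the original degree distribution, as GANI aims to do, would avoid creating such a detectable spike.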
Therefore, current NIA methods may violate the unno-
ticeability principle, as the features of the new nodes may
have a totally different range distribution or belong to a
different type (i.e., binary or continuous). They may also
make the injection more obvious by assuming that all new
nodes have the same degree. In order to make NIA more
realistic, we aim to address the above issues in current
strategies. In particular, to make the adversarial attacks
more general, we focus on a more practical and difficult
scenario, namely global poisoning attacks.
2.3 Defense on GNNs
Standard GNNs usually utilize the neighborhood aggrega-
tion mechanism to help obtain a better representation of