Understanding Key Point Cloud Features for
Development Three-dimensional Adversarial Attacks
Hanieh Naderi^a, Chinthaka Dinesh^{b,c}, Ivan V. Bajić^c, Shohreh Kasaei^d
aCollege of Interdisciplinary Science and Technologies, University of Tehran, Tehran, Iran
bNortheastern University, Vancouver, BC, Canada
cSchool of Engineering Science, Simon Fraser University, BC, Canada
dDepartment of Computer Engineering, Sharif University of Technology, Tehran, Iran
Abstract
Adversarial attacks pose serious challenges for deep neural network (DNN)-based analysis of various input signals. In the case of three-dimensional point clouds, methods have been developed to identify points that play a key role in the network's decision, and these points become crucial in generating existing adversarial attacks. For example, a saliency map approach is a popular method for identifying adversarial drop points, whose removal would significantly impact the network's decision. This paper seeks to enhance the understanding of three-dimensional adversarial attacks by exploring which point cloud features are most important for predicting adversarial points. Specifically, fourteen key point cloud features, such as edge intensity and distance from the centroid, are defined, and multiple linear regression is employed to assess their predictive power for adversarial points. Based on critical feature selection insights, a new attack method has been developed to evaluate whether the selected features can generate an attack successfully. Unlike traditional attack methods that rely on model-specific vulnerabilities, this approach focuses on the intrinsic characteristics of the point clouds themselves. It is demonstrated that these features can predict adversarial points across four different DNN architectures, Point Network (PointNet), PointNet++, Dynamic Graph Convolutional Neural Networks (DGCNN), and Point Convolutional Network (PointConv), outperforming random guessing and achieving results comparable to saliency map-based attacks. This study has important engineering applications, such as enhancing the security and robustness of three-dimensional point cloud-based systems in fields like robotics and autonomous driving.
Keywords: Point cloud processing, adversarial example, three-dimensional adversarial attack, graph signal processing, multiple linear regression, artificial intelligence applications in point cloud security, deep neural networks, adversarial robustness, autonomous driving, robotics

Preprint submitted to Elsevier, December 19, 2024
arXiv:2210.14164v4 [cs.CV] 18 Dec 2024
1. Introduction
DNNs have become a go-to approach for many problems in image processing and computer vision [1, 2, 3, 4, 5] due to their ability to model complex input-output relationships from a relatively limited set of data. However, studies have also shown that DNNs are vulnerable to adversarial attacks [6, 7]. An adversarial attack involves constructing an input to the model (an adversarial example) whose purpose is to cause the model to make a wrong decision. Much literature has been devoted to the construction of Two-Dimensional (2D) adversarial examples for image analysis models and the exploration of related defenses [8, 9, 6, 7, 10]. Research on adversarial attacks and defenses has gradually expanded to Three-Dimensional (3D) point cloud models as well, especially point cloud classification [11, 12, 13, 14, 15, 16, 17, 18, 19].
Point clouds themselves have become an increasingly important research topic [20, 21, 22, 23]. Given a deep model for point cloud classification, a number of methods have been proposed to determine critical points that could be used in an adversarial attack [24, 25, 26]. For example, Zheng et al. [24] proposed a differentiable method of shifting points to the center of the cloud, known as a saliency map technique, which approximates point dropping and assigns contribution scores to input points based on the resulting loss value. Other methods to determine critical points similarly try to estimate the effect of point disturbance on the output.
Once the critical points have been determined, they can be used to create adversarial examples. Several recent studies [11, 27, 12] use critical points as initial positions and then introduce perturbations to create attacks. Usually, some distance-related criteria, such as the Hausdorff [12] or the Chamfer distance [12], are used to constrain perturbations around critical positions. Instead of perturbation, another kind of attack drops the critical points; for example, the well-known Drop100 and Drop200 attacks [24] drop, respectively, 100 and 200 points from the point cloud in order to force the model to make a wrong decision. These are considered to be among the most challenging attacks to defend against [28, 29, 11, 30].
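To make the drop-attack mechanics concrete, the following is a minimal sketch, not the implementation of [24]: assuming per-point importance scores have already been produced by some critical-point method, a Drop100/Drop200-style attack simply removes the top-scoring points. The function name and signature are illustrative.

```python
import numpy as np

def drop_attack(points, scores, num_drop=100):
    """Illustrative Drop100/Drop200-style attack: remove the num_drop
    points with the highest importance scores.

    points: (N, 3) array of xyz coordinates
    scores: (N,) per-point importance from some critical-point method
    """
    # argsort is ascending, so the last num_drop indices are the most important
    keep = np.argsort(scores)[:-num_drop]
    return points[keep]
```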
The methods mentioned above, and others in the literature, require access to the DNN model in order to determine critical points. For example, “white-box” attacks have access to the model's internal architecture and parameters, while “black-box” attacks are able to query the model and obtain its output, but without the knowledge of internal details [15]. These approaches are in line with the popular view in the literature [7]: that the existence of adversarial examples is a flaw of the model, that they exist because the model is overly parametrized, nonlinear, etc. According to this reasoning, each model has its own flaws, i.e., its own critical points. Another view is that adversarial examples are consequences of the data distribution on which the model is trained [31]. This would suggest that different models trained on the same data may share some adversarial examples, but they have to be determined in the context of the data distribution.
In this paper, a different point of view is presented, demonstrating that critical points in point clouds can be determined from the features of the point cloud itself. To our knowledge, this is the first work in the point cloud literature to explore which features play a key role in adversarial attacks. In a broader context, we suggest that critical points in point clouds are inherent characteristics of the point clouds themselves, which give them the crucial properties that are important in their analysis, i.e., what makes an airplane an airplane, or a chair a chair.
In addition to the novel point of view, a new attack has been created based on the proposed methodology, leading to a less computationally expensive way of generating adversarial attacks compared to other methods. Furthermore, the attack has the potential to generalize better and be more transferable to different models.
The rest of the paper is organized as follows. The related work is discussed
in Section 2, together with a more detailed explanation of our contribution. Our
proposed methodology is presented in Sections 3 and 4. The experimental results
are reported in Section 5, followed by conclusions in Section 6.
2. Related Work
2.1. Deep models for point cloud analysis
PointNet [32] was a pioneering approach for DNN-based point cloud analysis. Learnt features are extracted from individual points in the input point cloud and then aggregated into global features via max-pooling. As a result of these global features, a shape can be summarized by a sparse set of key points, also called the critical point set. The authors of PointNet showed that any set of points between the critical point set and another set called the upper-bound shape will give the same set of global features, and thus lead to the same network decision. While this proved a certain level of robustness of PointNet to input perturbations, it also pointed to a strong reliance on the critical point set, which was subsequently used to design various adversarial attacks.
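As a minimal sketch of this critical-point property (our illustration, not PointNet's original code, and assuming access to the per-point features that feed the max-pooling layer), the critical point set can be read off as the points that attain the maximum in at least one feature channel:

```python
import torch

def critical_point_indices(point_features: torch.Tensor) -> torch.Tensor:
    """Illustrative extraction of a PointNet-style critical point set.

    point_features: (N, C) per-point features just before max-pooling.
    A point is critical if it attains the maximum in at least one of the
    C channels, i.e., it contributes to the global feature vector.
    """
    winners = point_features.argmax(dim=0)  # (C,) winning point index per channel
    return torch.unique(winners)            # de-duplicated critical point indices
```

Points outside this set can be perturbed, up to the upper-bound shape, without changing the global feature, which is exactly the reliance that the attacks discussed below exploit.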
PointNet has inspired much subsequent work on DNN-based point cloud analysis, of which we review only three approaches subsequently used in our experiments. One of these is PointNet++ [33], a hierarchical network designed to capture fine geometric structures in a point cloud. Three layers make up PointNet++: the sampling layer, the grouping layer, and the PointNet-based learning layer. These three layers are repeated in PointNet++ to learn local geometric structures. Another representative work is Dynamic Graph Convolutional Neural Network (DGCNN) [34]. It exploits local geometric structures by creating a local neighborhood graph and using convolution-like operations on the edges connecting neighboring pairs of points. PointConv [35] is another architecture that extends PointNet by incorporating convolutional layers that work on 3D point clouds. To better handle local features in point cloud data, a multi-layer perceptron (MLP) is used to approximate weight functions, and inverse density scale is used to re-weight these functions.
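To illustrate the neighborhood-graph construction that DGCNN's convolution-like edge operations act on, here is a sketch under our own naming; the actual layer additionally applies a shared MLP and a max over neighbors:

```python
import torch

def knn_edge_features(x: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Illustrative input to a DGCNN-style edge convolution.

    x: (N, 3) point coordinates. For each point, find its k nearest
    neighbors and form the edge features [x_i, x_j - x_i] on which an
    edge convolution applies a shared MLP and a max over neighbors.
    """
    dist = torch.cdist(x, x)                                # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]    # (N, k), drop self-match
    neighbors = x[idx]                                      # (N, k, 3)
    center = x.unsqueeze(1).expand(-1, k, -1)               # (N, k, 3)
    return torch.cat([center, neighbors - center], dim=-1)  # (N, k, 6)
```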
2.2. Adversarial attacks on point clouds
Point clouds are defined by the 3D coordinates of points making up the cloud. Thus, adversarial attacks can be performed by adding, dropping, or shifting points in the input cloud. An adversarial attack can be created by examining all points in the input cloud, or just critical points, as potential targets. Liu et al. [11] were inspired by the success of gradient-guided attack methods, such as the Fast Gradient Sign Method (FGSM) [6] and Projected Gradient Descent (PGD) [36], on 2D images. They applied a similar methodology to develop adversarial attacks on 3D point clouds. Similarly, the Carlini and Wagner (C&W) [9] optimization for finding adversarial examples has also been transplanted to 3D data. For example, Tsai et al. [37] use the C&W optimization formulation with an additional perturbation-bound regularization to construct adversarial attacks. To generate an attack with a minimum number of points, Kim et al. [38] extend the C&W formulation by adding a term to constrain the number of perturbed points. The adversarial points found in [38] were almost identical to the PointNet critical points.
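The following sketch shows how a PGD-style attack carries over from images to point clouds, here with a simple per-coordinate L-infinity bound; the model interface, step sizes, and constraint are illustrative assumptions rather than the exact formulations of [11, 37, 38]:

```python
import torch
import torch.nn.functional as F

def pgd_point_attack(model, points, label, eps=0.05, alpha=0.01, steps=10):
    """Illustrative PGD-style perturbation attack on a point cloud classifier.

    model: maps a (1, N, 3) cloud to class logits
    points: (1, N, 3) clean input; label: (1,) true class index
    Perturbations stay within an L-inf ball of radius eps per coordinate.
    """
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # ascend the loss
            adv = points + (adv - points).clamp(-eps, eps)  # project back into the ball
        adv = adv.detach()
    return adv
```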
Xiang et al. [12] demonstrated that PointNet can be fooled by shifting or adding synthetic points or adding clusters and objects to the point cloud. To find such adversarial examples, they applied the C&W strategy to the critical points, rather than all points. Constraining the search space around critical points is sometimes necessary because an exhaustive search through an unconstrained 3D space is infeasible. An attack method that uses the critical-point property for PointNet is proposed by Yang et al. [27]. By recalculating the class-dependent importance for each remaining point, they iteratively remove the most crucial point for the true class. The authors noted that the critical points exist in different models and that a universal point-dropping method should be developed for all models. Wicker et al. [39] proposed randomly and iteratively determining the critical points and then generating adversarial examples by dropping these points.
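A sketch of this iterative point-dropping idea, with the per-point importance approximated by the gradient magnitude of the true-class loss (our stand-in for the class-dependent importance recalculation of [27]):

```python
import torch
import torch.nn.functional as F

def iterative_drop(model, points, label, num_drop=100):
    """Illustrative iterative point dropping: recompute importance after
    each removal and drop the single most influential remaining point.

    points: (N, 3); label: (1,) true class index.
    """
    pts = points.clone()
    for _ in range(num_drop):
        pts = pts.detach().requires_grad_(True)
        loss = F.cross_entropy(model(pts.unsqueeze(0)), label)
        grad = torch.autograd.grad(loss, pts)[0]            # (N, 3)
        victim = grad.norm(dim=1).argmax()                  # most influential point
        keep = torch.arange(pts.shape[0], device=pts.device) != victim
        pts = pts[keep]
    return pts.detach()
```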
Arya et al. [40] identify critical points by calculating the largest magnitudes of the loss gradient with respect to the points. After finding those points, the authors propose a minimal set of adversarial points among the critical points and perturb them slightly to create adversarial examples. Zheng et al. [24] developed a more flexible method that extends finding critical points to other deep models besides PointNet. They introduced a saliency score defined as
s_i = r_i^{1+γ} ∂L/∂r_i,    (1)

where r_i is the distance of the i-th point to the cloud center, γ is a hyperparameter, and ∂L/∂r_i is the gradient of the loss L with respect to the amount of shifting the point towards the center. Adversarial examples are created by shifting the points with
high saliency scores towards the center, so that they will not affect the surfaces
much.
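A sketch of this saliency-based attack, following Eq. (1) as stated above: the cloud center is taken here as the coordinate-wise median, and ∂L/∂r_i is obtained by projecting the coordinate gradient onto the outward radial direction. Both choices are our illustrative assumptions, not necessarily those of the original implementation.

```python
import torch
import torch.nn.functional as F

def saliency_shift_attack(model, points, label, gamma=1.0, num_shift=100):
    """Illustrative saliency-map attack around Eq. (1): score each point by
    s_i = r_i^(1 + gamma) * dL/dr_i and shift the top-scoring points to the
    cloud center, approximating point dropping.

    points: (N, 3); label: (1,) true class index.
    """
    center = points.median(dim=0).values                    # (3,) cloud center (assumed median)
    pts = points.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(pts.unsqueeze(0)), label)
    grad = torch.autograd.grad(loss, pts)[0]                # (N, 3) dL/dx_i
    offset = pts.detach() - center
    r = offset.norm(dim=1)                                  # (N,) distance to center
    # dL/dr_i: directional derivative of the loss along the outward direction
    dl_dr = (grad * offset / r.clamp(min=1e-8).unsqueeze(1)).sum(dim=1)
    scores = r.pow(1.0 + gamma) * dl_dr                     # Eq. (1)
    idx = scores.topk(num_shift).indices
    adv = points.clone()
    adv[idx] = center                                       # shift high-saliency points
    return adv
```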
In addition to the methods for creating adversarial attacks on point clouds, a
number of methods for defending against these attacks have been developed [15].
For 3D point cloud classification, adversarial training and point removal as a
pre-processing step in training have been extensively studied [41]. Some of the
methods proposed for point removal to improve robustness against adversarial
attacks include simple random sampling (SRS) [27], statistical outlier removal
(SOR) [30], Denoiser and UPsampler Network (DUP-Net) [30], high-frequency
removal [29], and salient point removal [11].
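As an example of the point-removal defenses listed above, here is a minimal sketch of SOR-style statistical outlier removal; the values of k and alpha are illustrative hyperparameters, not the settings of [30]:

```python
import torch

def statistical_outlier_removal(points, k=10, alpha=1.1):
    """Illustrative SOR defense: discard points whose mean distance to
    their k nearest neighbors is anomalously large.

    points: (N, 3) point coordinates.
    """
    dist = torch.cdist(points, points)                      # (N, N) pairwise distances
    knn = dist.topk(k + 1, largest=False).values[:, 1:]     # (N, k), drop self-distance
    mean_knn = knn.mean(dim=1)                              # (N,) mean kNN distance
    threshold = mean_knn.mean() + alpha * mean_knn.std()    # global statistic cutoff
    return points[mean_knn <= threshold]
```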
2.3. Explainability Methods
Explainability of 3D point cloud deep models is an important emerging area of research. Zhang et al. [24] introduced a class-attentive response map to visualize activated regions in PointNet, while later work [42] focused on interpreting 3D CNNs using statistical methods to evaluate convolution functions. The method in [43] proposed iterative heatmaps to explain point cloud models, and Atkinson et al. [44] introduced a novel classification method that enhances explainability by integrating multiple layers of human-interpretable insights. Other notable approaches include PointMask [45], which used mutual information to