Adversarial Attack Against Image-Based
Localization Neural Networks
Meir Brand, Itay Naeh, Daniel Teitelman
Rafael - Advanced Defense Systems Ltd., Israel
br.meir@gmail.com, itay@naeh.us, dtyytlman@gmail.com
Abstract
In this paper, we present a proof of concept for adversarially attacking the image-based localization
module of an autonomous vehicle. The attack aims to cause the vehicle to make wrong navigational
decisions and prevent it from reaching a desired predefined destination in a simulated urban
environment. A database of rendered images allowed us to train a deep neural network that performs the
localization task, and to develop, implement, and assess the adversarial pattern. Our tests show that
this adversarial attack can prevent the vehicle from turning at a given intersection. This is done by
manipulating the vehicle's navigation module into falsely estimating its current position, so that it
fails to initialize the turning procedure until the last opportunity to perform a safe turn at the
intersection has passed.
Introduction
The future of transportation will be autonomous [2]. Self-driving cars [4] and drones [3] are already on
the ground and in the air. These platforms rely on multiple sensors such as LiDAR [5,6,7], GPS [8],
cameras [9,10], and IMUs [11] for estimating the state of the observed environment. By
incorporating additional algorithmic approaches, such as sensor fusion [12,13,14,15], one can
more accurately determine the action a given platform should take at each step of the way in order to
reach its goal destination. In urban areas, GPS accuracy may be hindered by a high density of buildings
[16]. In such cases, autonomous platforms can use image-based localization for self-positioning. This
type of localization is done by taking a single image from the usually forward-looking camera of the
vehicle and passing it through a neural network trained to output the platform's position and
orientation in the environment the network was trained on.
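To make this inference step concrete, the following is a minimal PyTorch sketch of single-image localization; the model handle, preprocessing, and output convention are illustrative assumptions rather than the implementation used in this work.

import torch
from PIL import Image
from torchvision import transforms

# Illustrative preprocessing for a 224x224 RGB input (an assumption,
# matching the input size described later in this paper).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def localize(model: torch.nn.Module, frame: Image.Image):
    # Estimate the vehicle pose {x, y, theta} from a single camera frame.
    batch = preprocess(frame).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        pose = model(batch)                 # assumed shape (1, 3): x, y, heading
    return pose.squeeze(0).tolist()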
Adversarial attacks are carefully crafted patterns that disrupt a neural network's output when introduced
into its input [17, 18, 19]. In this work we crafted a patch that was placed on a street billboard
in front of a traffic intersection. The patch pattern disrupts the navigation system of the
autonomous vehicle and prevents it from reaching its desired destination. Exposing such vulnerabilities
in deep neural networks benefits both industry and academic researchers by increasing awareness of this
matter. Figure 1 shows the top view of the simulated urban environment (a), the
navigation path (ABC) (b), and a single image taken by the car along the driving route (c). This
simulated environment was modeled with the Brushify-Urban Buildings Pack [20] on Unreal Engine 4
[1].
Figure 1 - The simulated urban environment
Related Work
The field of adversarial attacks against neural networks for autonomous vehicles spans a wide
range of research areas; in this section we briefly review the most widely known works. In their
pioneering work, I. Goodfellow et al. [17] described a way to generate adversarial examples, followed
by N. Carlini et al. [22]. In 2017, a group of researchers led by T. B. Brown [23] showed how a
localized adversarial patch in the real world can completely hinder the prediction of a neural network
while occupying only a small part of the image. In 2020, two additional important papers were
published, both continuing the leap toward making adversarial attacks applicable to the real world.
In the first, Z. Kong et al. [24] showed that an advertising sign can be used as an effective adversarial
patch for influencing the steering module of an autonomous vehicle; a second work, by H. Salman et al. [25],
described the existence of unadversarial examples, not just for 2D objects but for 3D ones as well. The
described patch is robust to different lighting conditions, orientations, and camera views. To the best of
our knowledge, attacks on a self-localization neural network have not been published before.
Research method
This proof of concept aims to demonstrate how an adversarial patch on a street billboard can induce
wrong navigational decisions. Real-world autonomous platforms are varied and diverse, so in order to
approach this abundance we define a target platform consisting of an autonomous car with
two relevant modules. The first is the navigation module, which dictates the general path that the
platform should take from one point to another and whether or not to turn at each junction. The second
module is the automatic driver, a tactical module that determines how to handle the car's
immediate actions: keeping to the lane, turning at the proper arc, and not causing any accidents with
the surrounding actors and environment. It is helpful to consider the navigation module as the
master system that instructs the automatic driver where to turn.
Within the scope of this work only the navigation module will be discussed, since the attack is
performed before the automatic driver is engaged for turning.
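To illustrate this master/slave relationship, here is a minimal sketch of a turn-triggering rule such a navigation module might use; the threshold and names are hypothetical, not taken from our implementation.

import math

TURN_TRIGGER_RADIUS = 20.0  # [m]; hypothetical triggering distance

def navigation_command(est_x, est_y, junction_x, junction_y):
    # Issue a high-level command to the automatic driver, based on the
    # position estimated by the localization CNN.
    distance = math.hypot(junction_x - est_x, junction_y - est_y)
    if distance < TURN_TRIGGER_RADIUS:
        return "intersection ahead turn right"
    return "continue straight"

Under a rule of this kind, an attack that biases the estimated position away from the junction keeps the computed distance above the threshold, so the turn command is never issued.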
In our scenario, the navigation plan of the car is to drive along the path ABC shown in Fig 1.a, with the
help of the navigation module. The adversarial attack on the navigation module causes the car to
miss the right turn at intersection B and keep driving straight to point D instead of driving to point C.
The main components needed for performing our adversarial research were:
a. Simulating an urban environment and controlling it from external software (Python) to produce a
database of street images.
b. Training a localization CNN that can estimate the vehicle position and orientation {x, y, direction}
with sufficient accuracy from a single image taken by the vehicle's front camera.
c. Crafting an adversarial patch in order to produce an adversarial attack on the localization CNN
(a sketch of this optimization appears after this list).
d. Implementing the navigation system, which is in charge of issuing basic commands to the vehicle's
self-driving system ("continue straight", "intersection ahead turn right", etc.).
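The sketch below shows one standard gradient-based way such a patch could be optimized; the data interface, masking scheme, and hyperparameters are assumptions, and the billboard's perspective warp is ignored for brevity.

import torch
import torch.nn.functional as F

def craft_patch(model, loader, patch_hw=(64, 64), steps=500, lr=0.05):
    # Maximize the pose-regression error of the localization CNN over a set
    # of rendered billboard views. `loader` is assumed to yield batched
    # (image, true_pose, mask) triples, where `mask` marks the billboard
    # pixels of each 224x224 frame.
    patch = torch.rand(3, *patch_hw, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for img, pose, mask in loader:
            # Resize the patch to frame size and paste it onto the billboard
            # region (a simplification that ignores perspective).
            tile = F.interpolate(patch.unsqueeze(0), size=img.shape[-2:],
                                 mode="bilinear", align_corners=False)
            adv = img * (1 - mask) + tile * mask
            loss = -F.mse_loss(model(adv), pose)  # push the estimate away
            opt.zero_grad()
            loss.backward()
            opt.step()
            patch.data.clamp_(0, 1)  # keep the patch displayable
    return patch.detach()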
a) The simulated urban environment
There is a variety of relevant Unreal Engine environments for this kind of demonstration (City,
Neighborhood, Urban, etc.). The main criteria we chose for this work were: an environment that
simulates city streets that are not too dense, about 10 streets with a few
intersections between them, and diverse buildings. The streets have sidewalks, with some trees
and plants along them, and driving lanes for cars. We found that the Brushify-Urban Buildings
Pack [21] was suitable for this research. The city map (top view of the city) is shown in Fig 1.a. The
yellow borders (squares) in this figure are the building contours. The variety of the buildings, in
shape, texture, color, and height, can be seen in Fig 1.b. The typical width of a traffic road in this
environment is about 50 [m] (4 lanes), which led us to choose a wide field of view (FOV) of 120° for
the vehicle's camera. This setup was used for the image dataset collected for training the localization
CNN, for the adversarial patch development, and for the vehicle path navigation. The Brushify-Urban
Buildings Pack has further entities such as lakes, open fields, and peripheral suburbs outside the city
zone shown in Fig 1.a. For this research we limited our zone of interest to the streets shown in this
figure. This zone is about 1.4 [km] long and 1.1 [km] wide.
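As an illustration of component (a), the sketch below samples camera poses inside this zone and grabs frames through an UnrealCV-style Python bridge; the actual tooling and coordinate conventions are not specified here, so every call and constant in this sketch should be treated as an assumption.

import random
from unrealcv import client  # assumed bridge; the tooling used is not named here

def capture_dataset(n_samples):
    # Sample random poses inside the ~1.4 x 1.1 km zone of interest and
    # record (image, x, y, theta) tuples for training the localization CNN.
    client.connect()
    samples = []
    for _ in range(n_samples):
        x_m = random.uniform(0.0, 1400.0)   # [m], zone length
        y_m = random.uniform(0.0, 1100.0)   # [m], zone width
        theta = random.uniform(0.0, 360.0)  # [deg], heading
        z_m = random.uniform(0.4, 1.0)      # [m], camera height above ground
        # UE4 units are centimeters; ground level at z = 0 is an assumption.
        client.request(f'vset /camera/0/location {x_m * 100} {y_m * 100} {z_m * 100}')
        client.request(f'vset /camera/0/rotation 0 {theta} 0')  # pitch yaw roll
        png = client.request('vget /camera/0/lit png')
        samples.append((png, x_m, y_m, theta))
    client.disconnect()
    return samples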
b) Localization Convolutional Neural Network (CNN)
The input/output of our localization CNN was selected to be as follows (a minimal architecture sketch
follows the list):
- Input: an image of 224x224x3 RGB pixels.
- Output: a vector of 3 parameters: the x coordinate, the y coordinate, and θ, the angle the car
is facing.
- All images were taken about 0.4–1.0 [m] above the ground in order to represent images taken from
a car's front-view camera. In addition, an elevation of [deg] was given to the camera to include more
informative features in the FOV.
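Here is a minimal sketch of a network satisfying this input/output interface; the ResNet-18 backbone is an illustrative assumption, not necessarily the architecture used in this work.

import torch.nn as nn
from torchvision import models

def build_localization_cnn():
    # A pose-regression head on a standard backbone, taking a 224x224x3
    # image and returning 3 values: x, y, theta.
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, 3)
    return net

Training such a network would regress the three outputs against ground-truth poses from the simulator, e.g. with an L2 loss; the wraparound of θ at 360° would need special handling, for instance by regressing sin θ and cos θ instead of the raw angle.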