A Cooperative Perception System Robust to Localization Errors
Zhiying Song1, Fuxi Wen1, Hailiang Zhang1and Jun Li1
Abstract—Cooperative perception is challenging for safety-
critical autonomous driving applications. The errors in the
shared position and pose cause an inaccurate relative transform
estimation and disrupt the robust mapping of the Ego vehicle. We
propose a distributed object-level cooperative perception system
called OptiMatch, in which the detected 3D bounding boxes and
local state information are shared between the connected vehicles.
To correct the noisy relative transform, the local measurements
of both connected vehicles (bounding boxes) are utilized, and an
optimal transport theory-based algorithm is developed to filter
out those objects jointly detected by the vehicles along with
their correspondence, constructing an associated co-visible set.
A correction transform is estimated from the matched object
pairs and further applied to the noisy relative transform, followed
by global fusion and dynamic mapping. Experiment results show
that robust performance is achieved for different levels of location
and heading errors, and the proposed framework outperforms the
state-of-the-art benchmark fusion schemes, including early, late,
and intermediate fusion, on average precision by a large margin
when location and/or heading errors occur.
Index Terms—Cooperative perception, vehicle-to-vehicle, posi-
tion error, heading error, optimal transport.
I. INTRODUCTION
Automated driving relies on the accurate perception of
the surrounding vehicles and dynamic environment. However,
automated vehicles are limited by the physical capabilities
(e.g., resolution and detection range) of the onboard sensors;
therefore, connected and automated vehicles (CAVs) have become
a promising paradigm in recent years.
CAVs are connected via vehicle-to-vehicle (V2V) or vehicle-
to-everything (V2X) communications and sense the surround-
ing environments through multi-agent cooperation. The effect
of cooperative driving is illustrated in Fig. 1. In practice,
the effectiveness of cooperative perception depends on two
aspects: 1) real-time and reliable data transmission within the
limited network bandwidth, and 2) robust information fusion
under highly dynamic and noisy environments.
The primary bottleneck for cooperative perception is the
sharing of precise data with low latency and low communica-
tion burden [1]. Generally, sharing raw data provides the best
performance because the least amount of information is lost.
But it can easily overload the communication network with
a large amount of real-time data transmission. As a trade-off,
features extracted from the raw data by deep neural networks
can reduce the amount of data to be shared and simultaneously
maintain a good data fusion performance. To further reduce
the communication load, sharing fully processed data, such
as the information of the detected objects, requires fewer
1School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China. Email: {song-zy21, zhanghl22}@mails.tsinghua.edu.cn, {wenfuxi, lijun1958}@tsinghua.edu.cn. Corresponding author.
Fig. 1. Illustration of cooperative perception. Colored rectangles represent the perception of cooperative vehicles with corresponding colors. Solid lines indicate direct perception, and dashed lines show fused results. (a) No cooperation: the Ego might crash into the pedestrian $\hat{M}$ because of the occlusion of the bus. (b) Accurate cooperation: CAV$_k$ sends its own accurate location $P_k$ and the relative location of $\hat{M}$ to the Ego; the crash might not happen. (c) Inaccurate cooperation: CAV$_k$ sends a noisy location $\hat{P}_k$ to the Ego, so a fake $\hat{M}$ appears from the Ego's perspective; the crash might still happen.
communication resources. In this paper, we fuse data from different CAVs at the object level, sharing the 3D bounding boxes, location, and pose information between the CAVs. This minimizes the burden on the communication network and allows for rapid processing. Most importantly, it is independent of onboard sensors and generalizes across multiple scenarios.

The second challenge for cooperative perception is robust information fusion in highly dynamic and noisy environments. For cooperative perception at the object level,
data received from other CAVs must first be converted to
the Ego frame. In reality, the transforms are estimated from
sensor measurements with limited resolution and accuracy,
such as global positioning system (GPS), real-time kinematic
(RTK), and inertial measurement unit (IMU). In most cases, the estimated transforms shared among the CAVs are inaccurate, disrupting the robust mapping of the Ego vehicle in the process of cooperation.
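To see concretely why even small pose errors matter at the object level, consider a minimal 2D sketch (an illustrative example with made-up numbers, not part of the proposed method): a small heading error in the shared state displaces a distant object by an amount roughly proportional to the range.

```python
import numpy as np

def to_ego_frame(obj_in_cav, cav_pos, cav_yaw):
    """Map a 2D object position from a CAV's local frame into the common frame."""
    c, s = np.cos(cav_yaw), np.sin(cav_yaw)
    R = np.array([[c, -s], [s, c]])
    return R @ obj_in_cav + cav_pos

obj = np.array([50.0, 0.0])        # object reported 50 m ahead of the CAV
true_pos = np.array([10.0, 5.0])   # CAV's true position
true_yaw = 0.0                     # CAV's true heading (rad)

# Assume a 0.5 m position error and a 2-degree heading error in the shared state
noisy_pos = true_pos + np.array([0.5, 0.0])
noisy_yaw = true_yaw + np.deg2rad(2.0)

displacement = np.linalg.norm(
    to_ego_frame(obj, noisy_pos, noisy_yaw) - to_ego_frame(obj, true_pos, true_yaw)
)
print(f"object displaced by {displacement:.2f} m")  # ~1.8 m, enough to break naive box association
```

The heading term dominates at range: 50 m x sin(2 deg) alone contributes about 1.7 m, which is comparable to a vehicle's width and can easily cause ghost objects like the fake $\hat{M}$ in Fig. 1(c).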
This paper focuses on the above challenges, and the main
contributions are summarized as follows:
• A distributed V2V-based cooperative perception system is proposed, in which optimal transport theory is introduced to automatically correct inaccurate vehicle location and heading measurements using only object-level bounding boxes.
• Experiments show that the proposed system outperforms state-of-the-art frameworks on two benchmark datasets in terms of robustness when location or heading errors occur, demonstrating the potential of simple object-level fusion to handle dynamic errors.
• The proposed system gives a general solution independent of the type and model of onboard sensors, which can be easily extended to vehicle-to-everything-based scenarios. It transmits only object-level information, providing a low-cost solution with a low communication burden and easy implementation.

arXiv:2210.06289v2 [cs.MA] 26 Apr 2023
The rest of the paper is organized as follows: In Section
II, the related work on cooperative perception and optimal
transport is introduced. The problem is formulated in Section
III. Section IV contains the proposed object-level cooperative
perception framework and detailed algorithms. Experimental
results and discussion are presented in Section V.
II. RELATED WORK
A. Cooperative Perception
Recent studies mainly focus on the aggregation of multi-
agent information to improve the average precision of percep-
tion results. Arnold et al. evaluated the performance of early and late fusion, as well as their hybrid combination schemes, in driving scenarios using infrastructure sensors [2]. F-Cooper
introduced feature-level data fusion that extracts and aggregates
the feature map of the raw sensor data by deep learning
networks and then detects objects on the fused feature map
[3]. V2VNet aggregated the feature information received from
nearby vehicles and took the downstream motion forecasting
performance into consideration [4]. OPV2V released the first
large-scale simulated V2V cooperation dataset and presented
a benchmark with 16 implemented models, within which we
implement our models [5]. However, these existing studies are
vulnerable to location and pose errors that are common and
inevitable in real-world applications.
FPV-RCNN introduced a location error correction module based on key-point matching before feature fusion to make the model more robust [6]. Vadivelu et al. proposed a deep learning-based framework to estimate potential errors [7], but it relies on feature-level fusion, which requires high computational capacity and does not generalize across scenarios. Gao et al. proposed a graph matching-based method to identify the correspondence between cooperative vehicles, which can be used to promote robustness against spatial errors [8]. They formulated the problem as a non-convex constrained optimization problem and developed a sampling-based algorithm to solve it; however, the problem is difficult and time-consuming to solve, which hinders its application in the real world. In this paper, we take these errors into account and design an efficient and robust object-level cooperative perception framework.
B. Optimal Transport Theory
Optimal transport (OT) theory has been widely used for assignment problems in various fields. In the field of intelligent vehicles, the Hungarian algorithm is one of the most popular variations of optimal transport methods and has been widely used to match two target sets, owing to its effectiveness and low complexity of $O(n^3)$. For instance, Cai et al. used it to assign vehicles to the generated goals in a formation so as to minimize the overall number of lane changes [9]. For the perception problem, Sinkhorn's matrix scaling algorithm [10] is more powerful for its high efficiency on the graphics processing unit (GPU), since Cuturi smoothed the classical optimal transport problem with an entropic regularization term in 2013 [11]. This makes the GPU available for the OT problem and accelerates its calculation far beyond conventional methods. In recent years, OT with Sinkhorn has shown strong performance on several vision tasks with the rapid development of GPUs. For example, Sarlin et al. [12] formulated the assignment of graph features as a differentiable OT problem and achieved state-of-the-art performance on image matching. Qin et al. [13] applied OT theory to the point cloud registration problem and developed a method with a 100-fold acceleration with respect to traditional methods. Given the efficiency of OT and the Sinkhorn algorithm, we deploy it to find the object correspondences between the observations of the Ego and the CAVs.
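To make the entropy-regularized OT machinery referenced above concrete, the following is a minimal Sinkhorn sketch in the spirit of [11] (the function name, parameters, and toy cost matrix are our own, not taken from the paper): given a pairwise cost matrix between two detection sets with uniform marginals, it alternately rescales rows and columns of the Gibbs kernel to obtain a soft assignment.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=100):
    """Entropy-regularized OT between two uniform discrete distributions.

    cost: (m, n) pairwise cost matrix; returns an (m, n) soft transport plan."""
    m, n = cost.shape
    a, b = np.ones(m) / m, np.ones(n) / n  # uniform marginals
    K = np.exp(-cost / eps)                # Gibbs kernel of the smoothed problem
    u, v = np.ones(m), np.ones(n)
    for _ in range(n_iters):               # alternate row/column scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy example: 3 Ego boxes vs. 3 CAV boxes with an obvious correspondence
cost = np.array([[0.1, 5.0, 6.0],
                 [5.0, 0.2, 7.0],
                 [6.0, 7.0, 0.3]])
plan = sinkhorn(cost)
matches = plan.argmax(axis=1)  # hard matches recovered from the soft plan
print(matches)
```

Because every operation is a dense matrix-vector product, the loop maps directly onto a GPU, which is the efficiency argument made above.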
III. PROBLEM FORMULATION
We consider a distributed cooperative perception scenario,
where any cooperative CAV can share the local state and the
information of the detected objects with the Ego vehicle. Let
Let $\mathcal{X} = \{o_i,\ i = 1, 2, \dots, m\}$ be the object set detected by the Ego vehicle and $\mathcal{Y} = \{o_j,\ j = 1, 2, \dots, n\}$ be the object set detected by the CAV. Object $i$ is represented as a 6D vector $o_i = [\mathbf{x}_i^T, \boldsymbol{\theta}_i^T]^T$, where $\mathbf{x}_i \in \mathbb{R}^3$ and $\boldsymbol{\theta}_i \in \mathbb{R}^3$ are the 3D position and orientation, respectively.
Cooperative fusion transforms $\mathcal{Y}$ into the Ego frame and aggregates it with $\mathcal{X}$. However, errors present in the states of both connected vehicles cause an inaccurate relative transform estimation, which this paper aims to correct.
The first challenge is to determine the co-visible region and associate co-visible objects, given the local states of the Ego vehicle and the CAV as well as the noisy measurements $\mathcal{X}$ and $\mathcal{Y}$, provided that the co-visible object set $\mathcal{M}$ is achievable. The second problem is to estimate a transform $\mathcal{F}$, defined by a rotation matrix $\mathbf{R} \in SO(3)$ and a translation vector $\mathbf{t} \in \mathbb{R}^3$, between objects in $\mathcal{X}$ and $\mathcal{Y}$ that approaches the accurate spatial transform. It can be formulated as the following optimization problem:

$$\min_{\mathcal{F}} \sum_{(i,j) \in \mathcal{M}} \| \mathbf{x}_i - \mathcal{F}(\mathbf{y}_j) \|^2 \qquad (1)$$

where $\mathbf{x}_i$ denotes the position vector of $o_i \in \mathcal{X}$ (and similarly $\mathbf{y}_j$ for $o_j \in \mathcal{Y}$), and $(i, j)$ is a possible object pair representing the same target. The operator $\mathcal{F}(\cdot)$ is defined as $\mathcal{F}(\cdot) = \mathbf{R} \cdot (\cdot) + \mathbf{t}$. The third task is to complete the fusion using the estimated transform to maximize the perception capacity of the Ego.
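When the matched set $\mathcal{M}$ is known, problem (1) admits the classical closed-form SVD solution (the Kabsch/Procrustes method). The sketch below is that textbook solver, shown only to ground the formulation; it is not necessarily the exact estimator the paper develops in Section IV.

```python
import numpy as np

def fit_rigid_transform(X, Y):
    """Closed-form least-squares R, t such that X ~ Y @ R.T + t.

    X, Y: (k, 3) arrays of matched positions (row i of X pairs with row i of Y)."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (Y - cy).T @ (X - cx)                 # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections (keep det(R) = +1)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cx - R @ cy
    return R, t

# Sanity check: recover a known rotation about z and a known translation
rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
X = Y @ R_true.T + t_true
R, t = fit_rigid_transform(X, Y)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

The hard part of the paper is therefore not solving (1) itself but obtaining $\mathcal{M}$ robustly under noise, which is exactly what the OT-based association addresses.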
IV. PROPOSED METHOD
The proposed fusion framework consists of four submodules: preprocessing, co-visible object association, optimal transform estimation, and global fusion with dynamic mapping.
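These four stages can be sketched end to end as follows. The function bodies are deliberately simplified stand-ins with names of our own choosing (greedy nearest-neighbor association instead of the paper's OT matcher, a translation-only correction instead of a full rigid transform); the sketch only shows how the stages compose.

```python
import numpy as np

def associate(ego_pts, cav_pts, gate=2.0):
    """Greedy nearest-neighbor association (stand-in for the OT-based matcher)."""
    pairs = []
    for i, p in enumerate(ego_pts):
        d = np.linalg.norm(cav_pts - p, axis=1)
        j = int(d.argmin())
        if d[j] < gate:               # only accept plausibly co-visible pairs
            pairs.append((i, j))
    return pairs

def estimate_correction(ego_pts, cav_pts, pairs):
    """Translation-only correction from matched pairs (simplified)."""
    if not pairs:
        return np.zeros(ego_pts.shape[1])
    return np.mean([ego_pts[i] - cav_pts[j] for i, j in pairs], axis=0)

def cooperative_perception_step(ego_pts, cav_pts_in_ego):
    """Skeleton of the four-stage object-level pipeline (illustrative)."""
    pairs = associate(ego_pts, cav_pts_in_ego)                 # co-visible association
    t = estimate_correction(ego_pts, cav_pts_in_ego, pairs)    # correction transform
    cav_corrected = cav_pts_in_ego + t                         # apply correction
    return np.vstack([ego_pts, cav_corrected])                 # global fusion

ego = np.array([[0.0, 0.0], [10.0, 0.0]])                 # Ego detections
cav = np.array([[0.8, 0.0], [10.8, 0.0], [20.8, 0.0]])    # CAV detections, shifted by a 0.8 m error
fused = cooperative_perception_step(ego, cav)
print(fused)
```

Note how the two co-visible objects anchor the correction, which then also repositions the third object that only the CAV sees; the following subsections replace each stand-in with the actual algorithm.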