M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance
Sandipan Das1,2, Navid Mahabadi3, Maurice Fallon4, Saikat Chatterjee1
We present a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data. To initiate the method, we use the prior GNSS pose information. We then perform incremental motion estimation in real-time, which produces robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS translation components using a factor graph framework. We also propose methods to account for signal loss with a novel synchronization and fusion mechanism. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of 7 km). From our evaluations, we show an average improvement of 61% in relative translation and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair.
I. INTRODUCTION
State estimation, which is a sub-problem of Simultaneous Localization and Mapping (SLAM), is a fundamental building block of autonomous navigation. To develop robust SLAM systems, proprioceptive (IMU – inertial measurement unit, wheel encoders) and exteroceptive (camera, lidar, GNSS – Global Navigation Satellite System) sensing are fused. Existing systems have achieved robust and accurate results [1], [2], [3]; however, state estimation in dynamic conditions and when measurements are lost or noisy is still challenging.
Compared to visual SLAM, lidar-based SLAM has higher accuracy as lidar range measurements of up to 100 m directly enable precise motion tracking. Moreover, owing to the falling cost of lidars, mobile platforms are increasingly equipped with multiple lidars [4], [5] to give complementary and complete 360° sensor coverage (see Fig. 1 for our data collection platform). This also improves the density of measurements, which may be helpful for state estimation in degenerate scenarios such as tunnels or straight highways. We can also estimate the reliability of a state estimator by computing the covariance of measurement errors produced from multiple lidar measurements.
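As a toy illustration of this reliability measure (our own construction, not the paper's estimator), one can treat the per-lidar registration residuals as samples and use their empirical covariance as a confidence proxy:

    # Hypothetical per-lidar translation residuals (meters) against a
    # common motion estimate; small, consistent residuals imply a
    # reliable estimate.
    import numpy as np

    residuals = np.array([[ 0.02, -0.01,  0.00],   # front-left lidar
                          [ 0.01,  0.02,  0.01],   # front-right lidar
                          [-0.02,  0.00,  0.01],   # rear-left lidar
                          [ 0.00, -0.01, -0.02]])  # rear-right lidar
    cov = np.cov(residuals, rowvar=False)          # 3x3 error covariance
    print(np.trace(cov))                           # scalar reliability proxy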
Fig. 1. Illustration of the four lidars with their embedded IMUs positioned around the data collection vehicle. The vehicle base frame B is located at the center of the rear axle. The sensor frames of the lidars are L_FR, L_FL, L_RR and L_RL, whereas the sensor frames of the IMUs are I_FR, I_FL, I_RR and I_RL. FR: front-right, FL: front-left, RR: rear-right and RL: rear-left.

1 KTH EECS, Sweden. {sandipan,sach}@kth.se
2 Scania, Sweden. sandipan.das@scania.com
3 Stockholm, Sweden. n.mahabadi@gmail.com
4 Oxford Robotics Institute, UK. mfallon@robots.ox.ac.uk

Meanwhile, IMUs are low-cost sensors which can be used to estimate a motion prior for lidar odometry by integrating the rotation rate and accelerometer measurements. However, IMUs suffer from bias instability and are susceptible to noise. An array of multiple IMUs (MIMUs) could provide enhanced signal accuracy through bias and noise compensation, as well as increased operational robustness to sensor dropouts. Multi-lidar odometry [6], [7] and MIMU odometry [8], [9] have been studied separately; however, to the best of our knowledge, the fusion of the combined set has not yet been explored.
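A minimal sketch of one such MIMU fusion strategy, assuming a simple averaging scheme over the currently live IMUs (the paper's actual method also compensates for the Coriolis effect arising from the lever arms between IMUs; that term is omitted here):

    import numpy as np

    def fuse_imu(samples, now, timeout=0.02):
        """samples: list of (timestamp, gyro(3,), accel(3,)), one per IMU.
        IMUs whose last sample is older than `timeout` are treated as
        dropped out, so a single sensor failure degrades gracefully."""
        alive = [(g, a) for ts, g, a in samples if now - ts < timeout]
        if not alive:
            raise RuntimeError("all IMUs dropped out")
        gyro = np.mean([g for g, _ in alive], axis=0)
        accel = np.mean([a for _, a in alive], axis=0)
        return gyro, accel

    # Usage with two live IMUs at time 0.0:
    gyro, accel = fuse_imu([(0.0, np.zeros(3), np.array([0.0, 0.0, 9.81])),
                            (0.0, 0.01 * np.ones(3), np.array([0.0, 0.0, 9.80]))],
                           now=0.0)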
A. Motivation
As our vehicles are equipped with a GNSS system, we perform state estimation in a global frame and take advantage of the GNSS to limit the drift rate. Since GNSS information is unreliable in urban environments ('urban canyon') or in underground scenarios (such as mining), we also need to fuse information from onboard sensors (which might themselves be susceptible to signal loss) to create robust state estimates.
To achieve this, we have identified two broad problems. Problem 1: State estimation over long time horizons with onboard sensing inherently suffers from drift. Problem 2: Sensor signals are susceptible to loss, due to networking or operational issues, which might affect the reliability of state estimates based on onboard sensing.
In our work we address Problem 1 by fusing the onboard state estimator with GNSS-based estimates. For Problem 2 we provide a formulation for the fusion of multiple lidars and a MIMU array to create robust signals under noisy conditions, which provides robustness to the failure of an individual lidar or IMU sensor while also expanding the lidar field of view.
B. Contribution
Our work is motivated by the broad literature on lidar- and IMU-based state estimation. Our proposed contributions are:
• Multi-lidar odometry using a single fused local submap, with handling of lidar dropout scenarios.
• MIMU fusion which compensates for the Coriolis effect and accounts for potential signal loss.
• A factor graph framework to jointly fuse multiple lidars, MIMUs and GNSS signals for robust state estimation in a global frame (a minimal sketch is given after this list).
• Experimental results and verification using data collected from Scania vehicles with the sensor setup shown in Fig. 2, with FoV schematics similar to Fig. 1.
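As a rough sketch of the kind of factor graph described above, the following uses GTSAM's Python bindings; the paper does not name its solver library, and the factor types and noise values here are illustrative assumptions only:

    import gtsam
    import numpy as np
    from gtsam.symbol_shorthand import X

    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()

    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))
    odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
    gps_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.5] * 3))

    # GNSS-derived prior anchors the trajectory in the global frame.
    graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
    values.insert(X(0), gtsam.Pose3())

    # Relative pose from (multi-)lidar registration between states.
    lidar_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
    graph.add(gtsam.BetweenFactorPose3(X(0), X(1), lidar_delta, odom_noise))
    values.insert(X(1), lidar_delta)

    # Translation-only GNSS factor on the new state.
    graph.add(gtsam.GPSFactor(X(1), gtsam.Point3(1.0, 0.0, 0.0), gps_noise))

    result = gtsam.LevenbergMarquardtOptimizer(graph, values).optimize()

A translation-only factor (here GPSFactor) is used because, as stated in the abstract, only the GNSS translation components are fused.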
II. RELATED WORK
There have been multiple studies of lidar- and IMU-based SLAM after the seminal LOAM paper by Zhang et al. [10], which is itself motivated by the Generalized-ICP work by Segal et al. [11]. In our discussion we briefly review the relevant literature.
A. Direct tightly coupled multi-lidar odometry
To develop real-time SLAM systems, simple edge and plane features are often extracted from the point clouds and tracked between frames for computational efficiency [3], [12], [13].
Using IMU propagation, motion priors can then be used to enable matching of point cloud features between key-frames. However, this principle cannot be applied to featureless environments. Hence, instead of feature engineering, the whole point cloud is often processed, which is analogous to processing the whole image in visual odometry methods such as LSD-SLAM [14]; this is known as direct estimation.
To support direct methods, Xu et al. [15] recently proposed the ikd-tree in their Fast-LIO2 work, which efficiently inserts, updates, searches and filters points to maintain a local submap. The ikd-tree achieves its efficiency through "lazy delete" and "parallel tree re-balancing" mechanisms.
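The following toy Python sketch (our own, not the Fast-LIO2 implementation) illustrates the lazy-delete idea: deletion only flags a node, and balance is restored by an occasional rebuild over the surviving points:

    import numpy as np

    class Node:
        def __init__(self, point, axis):
            self.point = point      # 3-D point stored at this node
            self.axis = axis        # split dimension (0, 1 or 2)
            self.left = None
            self.right = None
            self.deleted = False    # lazy-delete flag

    def build(points, depth=0):
        """Recursively build a balanced kd-tree from an (N, 3) array."""
        if len(points) == 0:
            return None
        axis = depth % 3
        points = points[points[:, axis].argsort()]
        mid = len(points) // 2
        node = Node(points[mid], axis)
        node.left = build(points[:mid], depth + 1)
        node.right = build(points[mid + 1:], depth + 1)
        return node

    def lazy_delete(node, point, tol=1e-9):
        """Mark a point as deleted without restructuring the tree."""
        if node is None:
            return
        if not node.deleted and np.allclose(node.point, point, atol=tol):
            node.deleted = True     # O(depth) instead of a full rebalance
            return
        go_left = point[node.axis] <= node.point[node.axis]
        lazy_delete(node.left if go_left else node.right, point, tol)

    def collect_alive(node, out):
        """Gather surviving points; used when the tree is rebuilt."""
        if node is None:
            return
        if not node.deleted:
            out.append(node.point)
        collect_alive(node.left, out)
        collect_alive(node.right, out)

    # Usage: delete cheaply, rebuild occasionally to restore balance.
    pts = np.random.rand(100, 3)
    root = build(pts)
    lazy_delete(root, pts[0])
    alive = []
    collect_alive(root, alive)
    root = build(np.array(alive))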
Furthermore, instead of point-wise operations, the authors of Faster-LIO [16] proposed voxel-wise operations for point cloud association across frames and reported improved efficiency. In our work we also maintain an ikd-tree of the fused lidar measurements and tightly couple the relative lidar poses, IMU preintegration and the GNSS prior in our proposed estimator. Since we jointly estimate the state from a residual cost function built upon the multiple modalities, we consider this a tightly coupled system.
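For contrast, here is a minimal sketch of voxel-wise association in the spirit of Faster-LIO; the class, function names and voxel size are our own assumptions, not the paper's API. Points are hashed into fixed-size voxels so a neighbor is found by probing a few adjacent cells instead of descending a tree:

    from collections import defaultdict
    import numpy as np

    VOXEL_SIZE = 0.5  # assumed map resolution in meters

    def voxel_key(p, s=VOXEL_SIZE):
        return tuple(np.floor(p / s).astype(int))

    class VoxelMap:
        def __init__(self):
            self.grid = defaultdict(list)   # voxel index -> list of points

        def insert(self, points):
            for p in points:
                self.grid[voxel_key(p)].append(p)

        def nearest(self, q):
            """Search the query's voxel and its 26 neighbors."""
            kx, ky, kz = voxel_key(q)
            best, best_d = None, np.inf
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for p in self.grid.get((kx + dx, ky + dy, kz + dz), []):
                            d = np.linalg.norm(p - q)
                            if d < best_d:
                                best, best_d = p, d
            return best, best_d

    vmap = VoxelMap()
    vmap.insert(np.random.rand(1000, 3) * 10.0)
    neighbor, dist = vmap.nearest(np.array([5.0, 5.0, 5.0]))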
While there have been many studies of state estimation using a single lidar and IMU, there is limited literature available on fusing multi-lidar and MIMU systems. Our idea closely resembles M-LOAM [7], where state estimation with multiple lidars and calibration was performed. However, the working principles are different, as M-LOAM is not a direct method and a MIMU system is not considered in their work.
Finally, most authors do not address how to achieve
reliability in situations of signal loss — an issue which is
important for practical operational scenarios.
Fig. 2. Reference frame conventions for our vehicle platform. The world frame W is a fixed frame, while the base frame B, as shown in Fig. 1, is located at the rear axle center of the vehicle. Each sensor unit contains two optical frames C, an IMU frame I, and a lidar frame L. The cameras are shown for illustration only and are not used in this work.
III. PROBLEM STATEMENT
A. Sensor platform and reference frames
The sensor platform with its corresponding reference frames is shown in Fig. 2, along with the illustrative sensor fields-of-view in Fig. 1. Each of the sensor housings contains a lidar with a corresponding embedded IMU and two cameras. Although we do not use the cameras in this work, they are illustrated here to show the full sensor setup. We used logs from a bus and a truck with similar sensor housings for our experiments. The two lower modules mounted at the rear, present on both vehicles, are not shown in the picture. The embedded IMUs within the lidar sensors are used to form the MIMU setup.
Now we describe the necessary notation and reference frames used in our system, following the convention of Furgale [17]. The vehicle base frame B is located at the center of the rear axle of the vehicle. Sensor readings from GNSS, lidars, cameras and IMUs are represented in their respective sensor frames as G, L(k), C(k) and I(k) respectively. Here, $k \in \{FL, FR, RL, RR\}$ denotes the location of the sensor on the vehicle, corresponding to front-left, front-right, rear-left and rear-right respectively. The GNSS measurements are reported in the fixed world frame W and transformed to the B frame by performing a calibration routine that is outside the scope of this work. In our discussions the transformation matrix is denoted as
$$\mathbf{T} = \begin{bmatrix} \mathbf{R}_{3\times3} & \mathbf{t}_{3\times1} \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \in SE(3),$$
and $\mathbf{R}\mathbf{R}^{\top} = \mathbf{I}_{3\times3}$, since the rotation matrix is orthogonal.
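A small numeric sketch of this convention, building a homogeneous transform from a rotation and translation and checking the orthogonality property:

    import numpy as np

    def make_T(R, t):
        """Assemble a 4x4 homogeneous transform in SE(3)."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Example: 90 degree yaw plus a 2 m forward offset.
    yaw = np.pi / 2
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0, 0.0, 1.0]])
    T = make_T(R, np.array([2.0, 0.0, 0.0]))
    assert np.allclose(R @ R.T, np.eye(3))          # rotation is orthogonal
    p_world = T @ np.array([1.0, 0.0, 0.0, 1.0])    # transform a point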
B. Problem formulation
Our primary goal is to estimate the position ${}_{W}\mathbf{t}_{WB}$, orientation ${}_{W}\mathbf{R}_{WB}$, linear velocity ${}_{W}\mathbf{v}_{WB}$, and angular velocity ${}_{W}\boldsymbol{\omega}_{WB}$ of the base frame B relative to the fixed world frame W. Additionally, we also estimate the MIMU biases $\mathbf{b}^{g}_{B}, \mathbf{b}^{a}_{B}$ expressed in the B frame, as that is where they can be sensed. Hence, our estimate of the vehicle's state $\mathbf{x}_i$ at time $t_i$ is denoted as:
$$\mathbf{x}_i = [\mathbf{R}_i, \mathbf{t}_i, \mathbf{v}_i, \boldsymbol{\omega}_i, \mathbf{b}^{a}_{i}, \mathbf{b}^{g}_{i}] \in SE(3) \times \mathbb{R}^{15}, \tag{1}$$
where the corresponding measurements are in the frames mentioned above.
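For concreteness, the state of Eq. (1) can be held in a container such as the following; the field names are our own, purely illustrative choices:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class State:
        R: np.ndarray    # orientation R_WB, 3x3 rotation matrix
        t: np.ndarray    # position t_WB, shape (3,)
        v: np.ndarray    # linear velocity in W, shape (3,)
        w: np.ndarray    # angular velocity, shape (3,)
        b_a: np.ndarray  # accelerometer bias in B, shape (3,)
        b_g: np.ndarray  # gyroscope bias in B, shape (3,)

    x_i = State(np.eye(3), np.zeros(3), np.zeros(3),
                np.zeros(3), np.zeros(3), np.zeros(3))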
IV. METHODOLOGY
A. Initialization
To provide an initial pose, we use the GNSS measurements ${}_{W}\mathbf{T}_{WG}$ and determine an initial estimate of the starting yaw and