Uncertainty-Aware Lidar Place Recognition in Novel Environments

Keita Mason1,2, Joshua Knights1,2, Milad Ramezani1, Peyman Moghadam1,2, Dimity Miller2
Abstract—State-of-the-art lidar place recognition models
exhibit unreliable performance when tested on environments
different from their training dataset, which limits their use in
complex and evolving environments. To address this issue, we
investigate the task of uncertainty-aware lidar place recognition,
where each predicted place must have an associated uncertainty
that can be used to identify and reject incorrect predictions.
We introduce a novel evaluation protocol and present the
first comprehensive benchmark for this task, testing across
five uncertainty estimation techniques and three large-scale
datasets. Our results show that an Ensembles approach is
the highest performing technique, consistently improving the
performance of lidar place recognition and uncertainty estima-
tion in novel environments, though it incurs a computational
cost. Code is publicly available at https://github.com/csiro-robotics/Uncertainty-LPR.
I. INTRODUCTION
Localisation is a crucial capability of mobile robots –
with an understanding of its location in a map, a robot
can navigate to new locations, monitor an environment, and
collaborate with other entities. Lidar place recognition (LPR)
algorithms use point clouds to enable robot localisation – a
robot can compare a recently captured point cloud with a
map of previous point clouds to identify its current loca-
tion, where the map can be generated on-the-fly or offline.
These recognised locations, i.e., revisit areas, can be used as
loop closure constraints in a Simultaneous Localisation and
Mapping (SLAM) algorithm to mitigate drift, or provide re-
localisation in GPS-denied environments.
State-of-the-art approaches to LPR utilise deep neural
networks [1]–[6], and exhibit impressive localisation per-
formance when tested on environments included in the
training dataset [4]–[6]. However, when tested on a novel
environment, i.e., an environment not in the training dataset,
their performance drops substantially. As an example, a
MinkLoc3D model [4] can achieve 93% recall when trained
and tested on urban road scenes from the United Kingdom,
but this drops to 61% recall when tested on urban road scenes
from South Korea. This performance degradation highlights
a critical weakness of state-of-the-art LPR techniques (and
deep neural networks in general) – an inability to generalise
to conditions not represented in the training dataset.
Uncertainty-aware deep networks are an established ap-
proach for enabling reliable performance in novel condi-
tions [7]–[9]. Alongside each prediction, a network provides
1Authors are with the Robotics and Autonomous Systems, DATA61,
CSIRO, Brisbane, QLD 4069, Australia. 2Authors are with the School
of Electrical Engineering, Queensland University of Technology (QUT),
Brisbane, Australia.
Author contact emails: {joshua.knights, milad.ramezani,
peyman.moghadam}@data61.csiro.au, {kk.graves,
d24.miller}@qut.edu.au
Fig. 1: The bottom traversal shows the performance of MinkLoc3D
[4] when trained on Oxford road scenes, but tested on road scenes
from the Daejeon Convention Centre in South Korea. Tested on
this novel environment, MinkLoc3D predicts many false revisits. In
the top traversal, we show uncertainty-aware LPR with Ensembles,
where uncertainty is used to reject the 50% most uncertain queries.
an estimate of its uncertainty, where high uncertainty indi-
cates the network is more likely to make a mistake. While
uncertainty-aware deep networks have been explored in many
computer vision fields and robotics applications [9]–[18], no
existing research explores uncertainty estimation for LPR.
In this paper, we introduce uncertainty-aware lidar place
recognition and present a comprehensive benchmark of this
task. Our contributions are as follows:
1) We formalise the uncertainty-aware LPR task for the
prediction of lidar-based localisation failure (Sec. III).
2) Drawing inspiration from state-of-the-art techniques in
related fields, we implement one standard baseline and
four uncertainty-aware baselines for the LPR setting
(Sec. III-C).
3) We introduce an evaluation protocol that utilises three
large-scale datasets and a range of metrics to quantify
place recognition ability and uncertainty estimation for
LPR in novel environments (Sec. IV).
4) We analyse the performance of all baseline methods on
this evaluation protocol, exploring how different error
types influence performance, as well as computational
cost (Sec. V).
We hope that this work enables and stimulates further
research into this important area by providing an extensive
evaluation protocol and initial benchmark for comparison.
II. RELATED WORK
Before introducing uncertainty-aware LPR, we first con-
textualise the work by reviewing the existing state-of-the-art
in LPR, and then discussing existing research into uncer-
tainty estimation in retrieval tasks.
arXiv:2210.01361v3 [cs.CV] 12 Jul 2023
A. Lidar Place Recognition
LPR utilising 3D point clouds has been significantly ex-
plored in the last few years. LPR approaches identify similar
places (revisited areas) by encoding high-dimensional point
clouds into discriminative embeddings (often referred to as
descriptors). Handcrafted LPR methods [19]–[23] generate
local descriptors by segmenting point clouds into patches, or
global descriptors that show the relationship between all the
points in a point cloud.
Recent state-of-the-art LPR approaches have been dom-
inated by deep learning-based architectures due to their
impressive performance [2]–[6], [24]–[28]. These approaches
typically utilise a backbone architecture to extract local
features from the point cloud, which are then aggregated into
a global descriptor. The specific design of these components
varies significantly between different works; PointNet [24],
graph neural networks [3], transformers [2], [6], [25], and
sparse-voxel convolutional networks [4]–[6], [26] have all
been proposed as local feature extractors, and aggregation
methods include NetVLAD [29], Generalised Mean Pooling
(GeM) [30] and second-order pooling [5], [31].
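To make the aggregation step concrete, Generalised Mean (GeM) pooling reduces a set of local features to a single global descriptor via a per-dimension power mean. The sketch below is a minimal NumPy illustration with hypothetical feature values, not the implementation used by any cited method:

```python
import numpy as np

def gem_pool(features: np.ndarray, p: float = 3.0, eps: float = 1e-6) -> np.ndarray:
    """Generalised Mean (GeM) pooling over N local features of dim L.

    p = 1 recovers average pooling; p -> infinity approaches max pooling.
    """
    clamped = np.clip(features, eps, None)  # GeM assumes non-negative activations
    return np.mean(clamped ** p, axis=0) ** (1.0 / p)

# Toy example: 4 local features of dimension 2 (hypothetical values).
local_feats = np.array([[0.1, 1.0],
                        [0.2, 2.0],
                        [0.3, 3.0],
                        [0.4, 4.0]])
avg = gem_pool(local_feats, p=1.0)  # equals plain average pooling
gem = gem_pool(local_feats, p=3.0)  # biased toward larger activations
```

The learnable parameter `p` lets the network interpolate between average and max pooling during training.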
B. Uncertainty Estimation for Retrieval Tasks
Though there are a number of works exploring uncertainty
estimation in lidar object detection [15], [17], [18], [32],
[33] and point cloud segmentation [16], no existing works
explore uncertainty estimation for LPR. While recent work
by Knights et al. [34] shares a similar motivation to our work
– reliable performance in novel environments – they explore
incremental learning and specifically the issue of catastrophic
forgetting.
Image retrieval is a field of computer vision that shares
a similar problem setup to LPR (though notably operat-
ing on images rather than point clouds). When estimat-
ing uncertainty for image retrieval, recent works learn an
uncertainty estimate by adding additional heads to their
network architecture [13], [14], [35]. Shi et al. [13] examine
uncertainty-aware facial recognition, where face embeddings
are modelled as Gaussian distributions by learning both a
mean vector and variance vector.
Warburg et al. [35] follow a similar approach, introducing
a ‘Bayesian Triplet Loss’ to extend training to also include
negative probabilistic embeddings. Most recently, STUN [14]
was proposed for uncertainty-aware visual place recognition.
STUN presents a student-teacher paradigm to learn a mean
vector and variance vector, using the average variance to
represent uncertainty [14]. Given the high performance of
these approaches in the related image retrieval task, we adapt
several of these methods to the LPR setting to serve as
baselines for our benchmark.
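As a sketch of how such probabilistic embeddings yield a scalar uncertainty, the snippet below assumes a network that outputs a mean vector and a per-dimension variance vector for each point cloud, and averages the variances in the manner of STUN [14]; the values are hypothetical:

```python
import numpy as np

def average_variance_uncertainty(variance: np.ndarray) -> float:
    """Scalar uncertainty as the mean of per-dimension embedding
    variances, following the averaging used by STUN-style methods."""
    return float(np.mean(variance))

# Hypothetical network outputs for one query point cloud:
mu = np.array([0.3, -1.2, 0.7])        # mean embedding vector
sigma2 = np.array([0.05, 0.20, 0.11])  # learned per-dimension variance
U = average_variance_uncertainty(sigma2)  # single scalar uncertainty
```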
III. METHODOLOGY
We first define the LPR task, and then formalise
uncertainty-aware LPR. Following this, we introduce the
baseline methods used for our benchmark.
A. Lidar Place Recognition
During LPR evaluation, a database contains point clouds
with attached location information. This database can be a
previously curated map or can be collected online as an
agent explores an environment. Given a query, i.e., a new
point cloud from an unknown location, an LPR model must
localise the query by finding the matching point cloud in
the database. If the predicted database location is within a
minimum global distance to the true query location, the pre-
diction is considered correctly recalled. In this configuration,
LPR performance is evaluated from the average recall of all
tested queries [1], [4], [6].
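The evaluation described above can be sketched as follows; the 25 m threshold and the 2D locations are illustrative choices for this example, not values prescribed by the protocol:

```python
import numpy as np

def average_recall(pred_locs: np.ndarray, true_locs: np.ndarray,
                   dist_thresh: float = 25.0) -> float:
    """A query is correctly recalled when the predicted database
    location lies within dist_thresh metres of the true query
    location; performance is the fraction of recalled queries."""
    dists = np.linalg.norm(pred_locs - true_locs, axis=1)
    return float(np.mean(dists <= dist_thresh))

# Three queries with hypothetical 2D locations (metres):
true_locs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 200.0]])
pred_locs = np.array([[5.0, 0.0], [100.0, 30.0], [0.0, 210.0]])
recall = average_recall(pred_locs, true_locs)  # 2 of 3 queries within 25 m
```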
We are motivated by the observation that perfect recall
in an LPR setting does not currently exist, and may not be
attainable in some applications – when operating in dynamic
and evolving environments, or dealing with sensor noise,
the potential for error always exists. In this case, we argue
that LPR models should additionally be able to estimate
uncertainty in their predictions, i.e., know when they don't
know. We formalise this below as uncertainty-aware LPR.
B. Uncertainty-aware Lidar Place Recognition
In uncertainty-aware LPR, each predicted match between a
query and database entry should be accompanied by a scalar
uncertainty estimate U. This uncertainty represents the lack
of confidence in a predicted location.
Following the existing LPR setup, the primary goal in
uncertainty-aware LPR is to maximise correct localisations
(i.e., recall). Uncertainty-aware LPR extends on this by addi-
tionally requiring models to identify incorrect predictions by
associating high uncertainty. We formulate this as a binary
classification problem, where $U$ is compared to a decision
threshold $\lambda$ to classify whether an LPR prediction is correct
or incorrect:
\[
F_\lambda(U) =
\begin{cases}
\text{Correct}, & U \leq \lambda \\
\text{Incorrect}, & U > \lambda
\end{cases}
\tag{1}
\]
Incorrect predictions can arise for two reasons: (1) the
query is from a location that is not present in the database,
or (2) the query is from a location in the database, but the
LPR model selects the incorrect database match. We refer to
these two error types as ‘no match error’ and ‘incorrect match
error’ respectively, and analyse them in detail in Sec. V-B.
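The binary classification in Eq. (1) amounts to a simple threshold test; a minimal sketch with hypothetical uncertainty values:

```python
def classify_prediction(U: float, lam: float) -> str:
    """Eq. (1): accept an LPR match when its uncertainty U <= lambda,
    reject it as incorrect otherwise."""
    return "correct" if U <= lam else "incorrect"

# Sweeping lambda trades off how many predictions are accepted
# against how many errors are rejected (values are illustrative):
uncertainties = [0.1, 0.45, 0.9]
decisions = [classify_prediction(u, lam=0.5) for u in uncertainties]
```

In practice, sweeping the decision threshold traces out the trade-off between retained recall and rejected errors.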
C. Baseline Approaches
To benchmark uncertainty-aware LPR, we adapt a number
of uncertainty estimation techniques existing in related fields
to the LPR setting.
Standard LPR Network: As explored in Sec. II-A, state-
of-the-art LPR techniques utilise a deep neural network to
reduce a point cloud to a descriptor d. Given a database
of $N$ previous point clouds and locations, a standard LPR
network converts this to a database $D$ of $N$ $L$-dimensional
descriptors, $D = \{d_i \in \mathbb{R}^L\}_{i=1}^{N}$. During evaluation, a query
point cloud $P_q \in \mathbb{R}^{M \times 3}$, with $M$ points, is reduced to a
query descriptor $d_q \in \mathbb{R}^L$. This query descriptor is compared
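The comparison between the query descriptor and the database reduces to a nearest-neighbour search; the sketch below uses Euclidean distance (one common choice) over randomly generated toy descriptors:

```python
import numpy as np

def retrieve(d_q: np.ndarray, D: np.ndarray) -> int:
    """Return the index of the nearest database descriptor to the
    query descriptor under Euclidean distance."""
    return int(np.argmin(np.linalg.norm(D - d_q, axis=1)))

rng = np.random.default_rng(0)
D = rng.normal(size=(5, 8))              # N=5 database descriptors, L=8
d_q = D[3] + 0.01 * rng.normal(size=8)   # query lying near entry 3
idx = retrieve(d_q, D)                   # index of predicted match
```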