Vision-based GNSS-Free Localization for
UAVs in the Wild
Marius-Mihail Gurgu, Jorge Peña Queralta, Tomi Westerlund
Turku Intelligent Embedded and Robotic Systems (TIERS) Lab
University of Turku, Finland.
Emails: {mmgurg, jopequ, tovewe}@utu.fi
Abstract—Considering the accelerated development of Un-
manned Aerial Vehicles (UAVs) applications in both industrial
and research scenarios, there is an increasing need for localizing
these aerial systems in non-urban environments, using GNSS-
Free, vision-based methods. Our paper proposes a vision-based
localization algorithm that utilizes deep features to compute
geographical coordinates of a UAV flying in the wild. The method
is based on matching salient features of RGB photographs
captured by the drone camera and sections of a pre-built
map consisting of georeferenced open-source satellite images.
Experimental results show that vision-based localization achieves
accuracy comparable to traditional GNSS-based methods,
which serve as ground truth. Compared to state-of-the-art
Visual Odometry (VO) approaches, our solution is designed for
long-distance, high-altitude UAV flights. Code and datasets are
available at https://github.com/TIERS/wildnav.
Index Terms—UAV; MAV; GNSS-Free; GNSS-denied;
Vision-based localization; Photogrammetry; Computer vision;
Perception-based localization; Visual Odometry
I. INTRODUCTION
Unmanned Aerial Vehicles (UAVs) are currently used in
a wide range of scenarios and commercial applications. They
already have reliable localization methods based on the Global
Navigation Satellite System (GNSS) [1], Visual-Inertial Simul-
taneous Localization and Mapping (VI-SLAM) [2], or Ultra-
wideband (UWB) systems [3]. The first method is typically used
in outdoor environments, where it is relatively simple to use
a GNSS sensor to obtain accurate positioning data [4]. The
second and third methods prove useful in environments
where the GNSS signal is unreliable, for example inside buildings,
where localization systems such as UWB [5], [6] or VI-SLAM
are preferred [7].
The autonomy and hardware capabilities of aerial drones
have improved significantly in recent years [8], enabling
them to safely fly autonomously over relatively long distances,
beyond visual line of sight (BVLOS). However, less attention
has been given to providing accurate positioning data during
long-distance UAV flights without using GNSS, which would
lead to more resilient solutions. This is an essential matter
——————————————————————————————
This research work is supported by the Academy of Finland’s AutoSOS
and AeroPolis projects (Grant No. 328755 and 348480) and by the Finnish
Ministry of Defence’s Scientific Advisory Board for Defence (MATINE)
project WildNav.
Fig. 1: Conceptual illustration of the proposed approach. Drone
images are compared to a set of reference satellite images and a
match is found based on deep features from SuperGlue [11].
since UAVs are mission-critical systems in need of a failsafe
mechanism that can be automatically engaged when the GNSS
signal becomes unavailable for various reasons, such as
jamming or spoofing [9].
At the same time, image processing capabilities based on deep
neural networks have been steadily improving over the last
decade, enabling even real-time object recognition and
segmentation. This observation also holds for the processing of
satellite image data, where artificial intelligence (AI) models
are trained to differentiate between important objects such as
buildings and rivers [10].
Building on top of available open-source technology and
algorithms [11], [12], this paper aims to provide a reliable
positioning mechanism for UAVs using only RGB pho-
tographs from a camera mounted on the drone and
open-source satellite images of the flight area. The method
is based on deep neural image segmentation that enables the
localization of the UAV by extracting recognizable features
of buildings, roads, rivers and forest edges. These features are
further matched to static satellite images using a robust method
based on a graph neural network model [11].
Fig. 1 shows the core idea of the proposed vision-based
localization method, where a UAV flying at high altitude
– provided that the ground surface below it has enough
landmarks with salient features – is able to match the cam-
era stream with onboard, georeferenced, open-source satellite
images.
The paper is organized in 6 sections as follows. Section II
provides a brief overview of related works on which the
current paper is based. Section III explains the motivation
and applicability of the implemented vision-based algorithm,
while Section IV presents the approach used for determining
the absolute geographical coordinates of a UAV. Section V dis-
cusses the experimental localization results. Finally, Section VI
concludes the work and outlines future research directions.
II. BACKGROUND
Previous studies directly related to our paper include semantic-
segmentation-based path planning [13], localization using
open-source Google Earth (GE) aerial images [14], and pose
estimation with neural networks trained on georeferenced
satellite photographs [15]. In addition to these, an open-source
image segmentation model originally used for building virtual
worlds [16] can provide useful input for both localization
and navigation purposes, as proposed in [13].
Additionally, SuperGlue [11], a graph neural network that
computes and matches features in outdoor images, became
part of our proposed localization algorithm due to its
remarkable performance in matching features across photographs
that differ significantly in perspective and lighting conditions.
The model proved effective not only for feature matching, but
also for estimating the perspective transformations (homographies)
used to compute geographical coordinates in the implemented
vision-based localization algorithm. Before selecting SuperGlue
as the feature matcher between drone camera photographs and
satellite images, template matching [17] and SIFT features [18]
were also considered; the latter two options were dropped due to
their relatively high computation time and lower accuracy
compared to [11].
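As a concrete illustration of this homography step, the following is a minimal sketch, not the authors' released implementation: it assumes matched keypoint arrays already produced by a SuperGlue-style matcher (a hypothetical input here) and uses OpenCV's RANSAC-based homography estimation to project the drone image center onto a satellite tile.

```python
import cv2
import numpy as np

def locate_in_tile(drone_kpts, sat_kpts, drone_shape, min_inliers=10):
    """Estimate the drone-image-to-satellite-tile homography and
    project the drone image center into tile pixel coordinates.

    drone_kpts, sat_kpts: (N, 2) arrays of matched keypoints, assumed
    to come from a SuperGlue-style matcher.
    drone_shape: (height, width) of the drone photograph.
    """
    # RANSAC rejects outlier correspondences, which matters when the
    # two images differ strongly in perspective and lighting.
    H, inlier_mask = cv2.findHomography(
        drone_kpts.astype(np.float32), sat_kpts.astype(np.float32),
        cv2.RANSAC, ransacReprojThreshold=5.0)
    if H is None or inlier_mask.sum() < min_inliers:
        return None  # too few consistent matches: no reliable fix

    # Map the drone image center through the homography.
    h, w = drone_shape
    center = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)
    return cv2.perspectiveTransform(center, H)[0, 0]  # (x, y) in tile
```

The returned tile pixel coordinates can then be converted to geographical coordinates through the linear georeferencing described in Section IV.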
Another notable paper is [12], which uses an advanced
template-based matching algorithm to estimate the pose
of a UAV using only images. Owing to insufficient
computational resources at the time, no implementation
was provided on the onboard computer of a drone. Another
major difference compared to our approach is that the drone
navigates specifically urban environments, where computing
and matching visual features is much less of a challenge
than in natural, in-the-wild flight areas.
Currently, the state of the art for navigation in a priori
unknown environments is Simultaneous Localization and
Mapping (SLAM), whose algorithms have arguably reached
a high degree of robustness and reliability in the past
decade [19]. VO methods [20] are an important building block
of SLAM algorithms; they allow UAVs to accurately
determine their position while navigating new environments.
However, one major limiting factor of these approaches is
that they assume the UAV is flying at a low enough altitude
that an RGB monocular or stereo camera can easily track
position shifts in detected features from frame to frame. Our
work focuses on high-altitude flight (120 meters), a situation
where a different approach to localization is needed,
as presented in Section III.
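To make this frame-to-frame tracking assumption concrete, the sketch below, which is illustrative only and not part of the proposed method, measures what fraction of detected corners survive pyramidal Lucas-Kanade tracking between two consecutive frames; at high altitude with little frame-to-frame overlap this fraction collapses, which is exactly the failure mode motivating our map-matching approach.

```python
import cv2

def tracked_fraction(prev_frame, next_frame, max_corners=500):
    """Fraction of corners successfully tracked between two frames.

    VO pipelines implicitly require this fraction to stay high.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Detect corners worth tracking in the first frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return 0.0

    # Pyramidal Lucas-Kanade optical flow tracks each corner forward.
    _next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None)
    return float(status.sum()) / len(pts)
```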
III. PROVIDING GNSS-FREE VISION-BASED
LOCALIZATION
Our main goal is to develop a localization algorithm for
long-distance flights that relies not on GNSS but only
on a monocular wide-angle camera. Such an approach
is useful in situations where the GNSS signal cannot be
reliably used, serving as a failsafe alternative that enables the
drone to reach its goal position or at least land safely in a pre-
established location. New commercial applications such
as autonomous drone delivery could use this localization
method to improve the reliability of their navigation.
The core attribute of the project is the environment in which
vision-based localization is provided: in the wild, denoting
natural (non-urban) environments where artificial structures
such as buildings and roads are sparse. This characteristic
transforms what would be a trivial feature matching and ho-
mography computation problem inside a city into a challeng-
ing process, due to the difficulty of finding salient features in
natural environments. Nevertheless, the final implementation
is able to provide accurate localization results using a neural
network for feature matching between drone photographs and
satellite images.
Another significant feature of the implemented localization
algorithm is that the pre-mapping process does not require the
UAV to fly. The map is built exclusively from open-source
satellite images, with the objective of enabling autonomous
localization in any area where the drone can fly, provided
flying there is legally permitted and physically possible
(i.e., not prevented by harsh environmental conditions).
IV. METHODOLOGY
Accurate feature matching of images is only useful if
the drone camera photographs can be precisely linked to
geographical coordinates. An important assumption of our
proposed localization method is that the flight area of the UAV
is known a priori, so a map for that specific zone can be built
and uploaded to the onboard computer for offline use. The
map is composed of rectangular sections representing RGB
satellite images with an approximate resolution of 1400×1200
pixels, collected from GE. Each of these sections is
collected from the same perspective, with the camera view
perpendicular to the ground surface and from an altitude that
offers a field of view similar to that of a wide-angle camera. One
example of these sections can be observed in Fig. 2, where
the transparent white rectangle represents the georeferenced
map tile. There is a linear relation (see Equations 1 and 2)
between the pixel coordinates of the image file and absolute
geographical coordinates.
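Equations 1 and 2 themselves are not reproduced in this excerpt; under the assumption that each section is georeferenced by the latitude/longitude of its top-left corner $(\varphi_{TL}, \lambda_{TL})$ and bottom-right corner $(\varphi_{BR}, \lambda_{BR})$, a plausible form of this linear relation for a tile of $W \times H$ pixels, with pixel $(x, y)$ measured from the top-left corner, is:

$$\varphi(x, y) = \varphi_{TL} + \frac{y}{H}\left(\varphi_{BR} - \varphi_{TL}\right), \qquad \lambda(x, y) = \lambda_{TL} + \frac{x}{W}\left(\lambda_{BR} - \lambda_{TL}\right)$$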
A. Georeferencing map sections
Because the flight area is generally too large to be repre-
sented as a single image file, the map is split into separate
sections, each of them georeferenced with two distinct geographical
coordinate pairs corresponding to its top-left and bottom-right corners.
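A minimal sketch of this scheme, under the same corner-coordinate assumption as above and independent of the released wildnav code (the example corner values are made up):

```python
from dataclasses import dataclass

@dataclass
class MapTile:
    """One georeferenced map section; assumed to be axis-aligned and
    referenced by its top-left and bottom-right corner coordinates."""
    width_px: int
    height_px: int
    lat_tl: float  # latitude, top-left corner
    lon_tl: float  # longitude, top-left corner
    lat_br: float  # latitude, bottom-right corner
    lon_br: float  # longitude, bottom-right corner

    def pixel_to_geo(self, x: float, y: float) -> tuple[float, float]:
        """Linearly interpolate tile pixel (x, y) to (lat, lon)."""
        lat = self.lat_tl + (y / self.height_px) * (self.lat_br - self.lat_tl)
        lon = self.lon_tl + (x / self.width_px) * (self.lon_br - self.lon_tl)
        return lat, lon

# Example: convert the projected drone-image center from the matching
# step to geographical coordinates.
tile = MapTile(1400, 1200, lat_tl=60.4520, lon_tl=22.2780,
               lat_br=60.4410, lon_br=22.3050)
print(tile.pixel_to_geo(700, 600))  # approximately the tile center
```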