Real-Time Dynamic Map with Crowdsourcing
Vehicles in Edge Computing
Qiang Liu, Member, IEEE, Tao Han, Senior Member, IEEE,
Jiang (Linda) Xie, Fellow, IEEE, and BaekGyu Kim, Member, IEEE,
Abstract—Autonomous driving perceives surroundings with line-of-sight sensors that are compromised under environmental uncertainties. To achieve real-time global information in a high-definition map, we investigate sharing perception information among connected and automated vehicles. However, it is challenging to achieve real-time perception sharing under varying network dynamics in automotive edge computing. In this paper, we propose a novel real-time dynamic map, named LiveMap, to detect, match, and track objects on the road. We design the data plane of LiveMap to efficiently process individual vehicle data with multiple sequential computation components, including detection, projection, extraction, matching, and combination. We design the control plane of LiveMap to achieve adaptive vehicular offloading with two new algorithms (centralized and distributed) that balance latency and coverage performance based on deep reinforcement learning techniques. We conduct extensive evaluations through both realistic experiments on a small-scale physical testbed and network simulations on an edge network simulator. The results show that LiveMap significantly outperforms existing solutions in terms of latency, coverage, and accuracy.
Index Terms—Dynamic Map, Edge Computing, Autonomous
Driving
I. INTRODUCTION
AUTONOMOUS driving and advanced driving assistance systems (ADAS) are evolving with the development of modern machine learning and pervasive parallel computing. Vehicles leverage a variety of sensors, e.g., cameras and LiDAR, to perceive their surroundings, and use onboard computers to understand the collected raw data in real time, e.g., for semantic segmentation and object recognition. With the high-definition (HD) map, advanced vehicular control algorithms accurately relocalize the vehicle and can handle road situations with the perceived environmental context, e.g., pedestrians and lanes.
Achieving highly reliable and safe driving, however, is very challenging based on a non-real-time HD map and individual vehicle perception alone. On the one hand, the HD map [2], including its geometric, semantic, and map-prior layers, carries no real-time road information, e.g., pedestrians and vehicles, on the time scale of subseconds. On the other hand, the perception of an individual vehicle is limited and might be compromised
Qiang Liu is with the School of Computing, University of Nebraska-
Lincoln. E-mail: qiang.liu@unl.edu
Tao Han is with the Department of Electrical and Computer Engineering,
New Jersey Institute of Technology. E-mail: tao.han@njit.edu
Jiang (Linda) Xie is with the Department of Electrical and Com-
puter Engineering, University of North Carolina at Charlotte. E-mail:
linda.xie@uncc.edu
BaekGyu Kim is with the Department of Information and Communication
Engineering, Daegu Gyeongbuk Institute of Science and Technology. E-mail:
bkim@dgist.ac.kr
Partial contents of this article appeared in IEEE International Conference
on Computer Communications 2021 [1].
Fig. 1: An example of automotive edge computing. (Figure labels: LiveMap, transportation systems, edge servers, radio access points.)
under a variety of environmental uncertainties such as weather and occlusion [3]. For example, existing line-of-sight vehicle sensors have limited sensing ranges, which means they cannot perceive information in occluded areas [4]. Consider a car following a truck that blocks the car's front sensors: passing the truck without information about the opposite lane is unsafe.
Connected and automated vehicles (CAVs) have emerged in recent years, connecting vehicles [5], [6] via advanced wireless technologies, e.g., 5G and beyond, with pervasive edge computing infrastructures [7], [8], e.g., edge servers in radio access networks (RANs). The Automotive Edge Computing Consortium estimates that more than 50% of all cars on the road in the United States will have connected features by 2025 [9]. The various onboard sensors of vehicles, e.g., cameras and LiDAR, can be leveraged to construct global information via crowdsourcing. Using edge servers as the hub, the information perceived by individual vehicles can be seamlessly collected, processed, and shared among vehicles and infrastructures with ultra-low latency.
However, it is non-trivial to share perception data among CAVs because of constrained network infrastructures and resources (e.g., spectrum and servers). For example, the perceptions of vehicles may contain duplicated information due to their heavily overlapping sensing ranges in dense urban scenarios. In addition, the uplink transmission of vehicle perception data, e.g., point clouds, demands a tremendous data rate that may overwhelm mobile networks [10]. Edge servers, which support hundreds of vehicles if not more, experience fast-changing traffic and workloads under varying vehicle trajectories. Therefore, it is imperative to design intelligent network management solutions that achieve real-time perception sharing under constrained network resources in automotive edge computing.
In this paper, we propose LiveMap, a new real-time dynamic
map as shown in Fig. 1. LiveMap achieves the detection,
matching, and tracking of objects on the road on the time scale of subseconds via crowdsourcing data from CAVs.

arXiv:2210.05034v1 [cs.DC] 10 Oct 2022

Fig. 2: The overview of LiveMap. The data plane processes sensor data to detect, match, and track objects. The control plane manages the network to accelerate the transmission and computation of vehicle offloading. (Figure labels: data acquisition, object detection, object projection, feature extraction, object matching, combination & tracking, local map, global map, HEAD and D-HEAD algorithms, vehicle, server.)
We design LiveMap with an efficient data plane for vehicle data processing and an intelligent control plane for vehicle offloading decisions. The data plane is composed of object detection, object projection, feature extraction, object matching, and object combination. In particular, we improve object detection with new neural network pruning techniques, build concise feature extraction with variational autoencoder techniques, optimize feature matching with a novel location-aware distance function, and increase combination accuracy with a new confidence-weighted combination method. The control plane enables adaptive vehicle offloading, i.e., offloading computations from vehicles to servers, under varying network dynamics. We design two algorithms, which apply to centralized and distributed scenarios, to minimize the latency of offloading while satisfying the map coverage requirement. We design these two algorithms based on deep reinforcement learning (DRL) to optimize the vehicle scheduling and offloading decisions of individual CAVs. In addition, we implement LiveMap on a small-scale physical testbed with multiple JetRacers (Nvidia Jetson Nano), a 5GHz WiFi router, and an edge server with an Nvidia GPU.
The main contributions of this paper are summarized as follows:
• We design a new real-time dynamic map (LiveMap) via crowdsourcing sensor data of CAVs in automotive edge computing networks.
• We develop an efficient data plane with sequential processing of sensor data that reduces the processing delay and improves detection accuracy.
• We design an intelligent control plane with two new algorithms that improve the latency performance without compromising the map coverage.
• We develop an edge network simulator and prototype LiveMap on a small-scale physical testbed.
• We evaluate LiveMap via both experiments and simulations, and the results validate its superior performance.
II. LiveMap OVERVIEW
In Fig. 2, we overview the architecture of LiveMap, which includes the data plane, i.e., sensor data processing for detecting, matching, and tracking objects, and the control plane, i.e., network management for accelerating transmission and computation.
The data plane is composed of several sequential processing components. The acquisition component retrieves RGB-D images from CAV sensors together with the vehicle's relocalization result. The detection component detects possible objects in RGB images by exploiting a state-of-the-art object detection framework, i.e., YOLOv3 [11]. The projection component projects the detected objects from pixel coordinates to world coordinates based on the depth information and the camera-to-world transformation matrix. The extraction component extracts visual features from cropped object images by using a variational autoencoder. The matching component matches detected objects in either the local or global map according to their visual features and geo-locations. The combination component combines multi-view objects by integrating a variety of attributes, e.g., confidence and geo-location. Note that all components, except acquisition and combination, can be flexibly executed on either CAVs or edge servers, according to the control plane. Finally, the global map is updated, and new updates are broadcast to all vehicles to update their local maps.
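The projection step described above, from pixel coordinates to world coordinates, can be sketched with the standard pinhole back-projection. The intrinsic matrix and pose values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, T_cam2world):
    """Back-project pixel (u, v) with metric depth into world coordinates.

    K is the 3x3 camera intrinsic matrix; T_cam2world is the 4x4
    camera-to-world transform obtained from GPS or relocalization.
    """
    # Pixel -> camera-frame point: scale the normalized ray by depth.
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Camera frame -> world frame via the homogeneous transform.
    p_world = T_cam2world @ np.append(p_cam, 1.0)
    return p_world[:3]

# Illustrative intrinsics and an identity pose (camera at world origin).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
p = pixel_to_world(320, 240, 5.0, K, T)  # principal point, 5 m depth
```

A point at the principal point maps straight along the optical axis, so this example yields a world point 5 m in front of the camera; in LiveMap the transform would instead come from the vehicle's relocalization result.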
The control plane includes a central and a distributed
scheme. In the central scheme, a vehicle sends a service
request along with its local state to the edge server. The HEAD
algorithm optimizes the scheduling of this vehicle under the
current map coverage, and determines the offloading decision
with a central DRL agent under the global state if the vehicle
is scheduled. In the distributed scheme, the vehicle invokes the
D-HEAD algorithm independently to optimize its scheduling
and offloading decision according to the current local state.
The vehicle starts the data plane according to the offloading
decision if it is scheduled.
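The per-vehicle decision flow above can be caricatured with a simplified rule-based stand-in. HEAD's actual policy is learned with DRL, so the function below is only an illustrative sketch of the two decisions involved (whether to schedule the vehicle given coverage, and where to run the computation), with all parameter names assumed:

```python
def schedule_and_offload(coverage_gain, coverage_req, current_coverage,
                         local_latency, offload_latency):
    """Illustrative stand-in for the HEAD/D-HEAD decision logic.

    Schedule the vehicle only if the map still needs its coverage
    contribution; if scheduled, choose the computation placement with
    the lower estimated end-to-end latency.
    """
    if current_coverage >= coverage_req and coverage_gain <= 0.0:
        return None  # skip: map coverage is already satisfied
    # Offload to the edge server only when it is estimated to be faster.
    return "offload" if offload_latency < local_latency else "local"

# Coverage is short of the requirement and the server is faster,
# so the vehicle is scheduled and offloads its computation.
decision = schedule_and_offload(coverage_gain=0.1, coverage_req=0.9,
                                current_coverage=0.5,
                                local_latency=0.2, offload_latency=0.1)
```

In the real system the latency estimates and the schedule/offload choice are produced by the DRL agent from the global (central scheme) or local (distributed scheme) state, rather than by fixed thresholds.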
III. THE DESIGN OF DATA PLANE
The data plane is designed to process vehicle data efficiently in terms of both processing delay and detection accuracy.
A. Data Acquisition
We develop the acquisition component to acquire the sensor data, i.e., LiDAR and RGB-D images. Without loss of generality, we consider the RGB and depth images from RGB-D cameras, e.g., the Intel RealSense D435. In addition, the component obtains the accurate vehicle location in world coordinates, which relies on either high-accuracy GPS or advanced relocalization algorithms such as ORB-SLAM2 [12]. The accurate vehicle location is necessary for combining the multi-view objects detected by multiple vehicles.
B. Object Detection
We design the detection component to detect transportation objects, e.g., trucks and pedestrians, from RGB images, where the detection results include the object classes, probabilities, and 2D