Density Planner: Minimizing Collision Risk in Motion Planning
with Dynamic Obstacles using Density-based Reachability
Laura Lützow1,2, Yue Meng2, Andres Chavez Armijos3, and Chuchu Fan2
Abstract— Uncertainty is prevalent in robotics. Due to
measurement noise and complex dynamics, we cannot
estimate the exact system and environment state. Since
conservative motion planners are not guaranteed to find a
safe control strategy in a crowded, uncertain environment,
we propose a density-based method. Our approach uses
a neural network and the Liouville equation to learn the
density evolution for a system with an uncertain initial
state. We can plan for feasible and probably safe trajectories
by applying a gradient-based optimization procedure to
minimize the collision risk. We conduct motion planning
experiments on simulated environments and environments
generated from real-world data and outperform baseline
methods such as model predictive control and nonlinear
programming. While our method requires offline planning,
the online runtime is 100 times lower than that of model
predictive control.
The code and supplementary material can be found at
https://mit-realm.github.io/density_planner/.
I. INTRODUCTION
Ensuring safety is crucial for autonomous systems.
However, because of uncertainties such as measurement
errors, external disturbances, and model errors, we
cannot precisely predict the future state of the system or
the environment, which complicates safety certification.
To deal with these uncertainties, there are two main
directions in motion planning: Conservative approaches
focus on robust safety by considering all possible
situations via reachability analysis [1], planning a worst-
case safe trajectory [2] or computing safe sets in which
safety can be guaranteed for contained trajectories [3,
4]. However, in many situations, a guaranteed safe
trajectory may not exist. Thus, other approaches seek
trajectories that are safe with high probability [5, 6]
by assuming linear dynamics or Gaussian distributions
to propagate the uncertainties along the trajectories.
Our method follows a similar philosophy to the latter
approach, but handles nonlinear dynamics and the
propagation of arbitrary initial state uncertainties by
leveraging machine learning techniques.
*Laura Lützow was supported by a fellowship within the IFI
program of the German Academic Exchange Service (DAAD). Ford
Motor Company provided funds to assist the authors with their
research, but this article solely reflects its authors’ opinions and
conclusions and not Ford’s.
1Department of Informatics, Technical University of Munich, Germany
2Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, USA
3Division of Systems Engineering, Boston University, USA
Recent work [7] solves the Liouville equation [8]
with a neural network to learn the density distri-
bution of reachable states. The follow-up work [9]
conducts perturbation-based motion planning to en-
sure probabilistic safety. However, the method in [9]
is computationally expensive and was not tested on
realistic datasets. Inspired by this work, we design a
differentiable framework to plan for feasible trajectories
which minimize the collision risk. Beyond the system
settings in [9], we also model the uncertainty from the
environment and estimate the collision probability in
an efficient manner. The proposed planning process can
be described as follows: First, we predict the evolution
of the state distribution in time for a closed-loop system
under a parameterized control policy using a neural
network. Then, we model the environment uncertainty
as a probability occupancy grid map. Finally, we use a
gradient method to optimize the policy parameters by
minimizing the collision risk.
We conduct experiments on both simulated and real-
world data [10] where our method can outperform a
standard and a tube-based model predictive controller
(MPC) with a lower failure rate and lower online
runtime. Furthermore, our method can be used for
nonlinear system dynamics and uncertain initial states
following an arbitrary probability distribution and can
find plausible trajectories even when the worst-case
collision is inevitable.
Our contributions are threefold: (1) To the best of
our knowledge, we are the first to propose a density-
based, differentiable motion planning procedure for
nonlinear systems with state and environment uncer-
tainties. (2) We provide an efficient method to compute
the collision probability for closed-loop dynamics and
an effective gradient-based algorithm for probabilistic
safe trajectory planning. (3) We test our approach
on simulated and real-world testing cases where we
outperform state-of-the-art motion planning methods.
II. RELATED WORKS
Motion planning under uncertainties:
Motion plan-
ning aims to find a trajectory that minimizes some
cost function, fulfills the kinematic constraints, and
avoids collisions [11]. Environment uncertainties can be
treated as occupancy maps [12, 13, 14] where probability
navigation functions [15], chance-constrained RRT [16],
or constrained optimization techniques [17] are applied
to ensure safety. However, most of these methods
assume that the initial state is precisely known, but
this is often not given in the real world due to sensor
noise. Thus, it is desirable to consider the whole initial
state distribution for motion planning as it is done by
density-based planning methods.
Density-based planning:
Many density-based plan-
ning approaches assume that the initial state follows
a Gaussian distribution. Under linear dynamics, the
mean and covariance of the state distribution can be
easily propagated through time [18], and the related
optimal control problems can be solved by convex pro-
gramming [19]. There exist several publications dealing
with the covariance steering problem for nonlinear
systems [20, 21, 22], but these methods are not directly
applicable to non-Gaussian cases.
Controlling arbitrary initial distributions is closely
related to optimal transport theory [23]. The works [24,
25] find the optimal policy by solving the Hamilton-
Jacobi-Bellman (HJB) equation. However, this method
can only be used for full-state feedback linearizable
systems and the convergence cannot be guaranteed.
In [26], a primal-dual optimization is proposed to
iteratively estimate the density and update the policy
with the HJB equation, but this approach requires a time-
consuming state discretization and cannot generalize to
high-dimensional systems.
The closest work to ours is [9], which estimates the
density for nonlinear systems and arbitrary initial distri-
butions with neural networks (NN). With a perturbation
method and nonlinear programming, a valid control
policy is sought. However, the planning algorithm
is very near-sighted, the collision-checking procedure
is still computationally expensive, and no optimality
statements can be claimed. Our approach overcomes
these drawbacks by utilizing a more efficient collision
computation method and a gradient-based optimization
algorithm to minimize the collision probability.
III. PRELIMINARIES
Density prediction:
First, we consider an autonomous system $\dot{x} = f(x)$, where $x \in \mathbb{R}^n$ is the system state with initial density distribution $\rho_0(x)$, and $\rho(x, t)$ is the density of $x$ at time $t$. According to [8], $\rho(x, t)$ follows the Liouville equation (LE):
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (f \cdot \rho) = 0, \qquad (1)$$
where $\nabla \cdot (f \cdot \rho) = \sum_{i=1}^{n} \frac{\partial}{\partial x_i} \big( f_i(x)\, \rho(x, t) \big)$. Following [26], we know that the density at the future state $\Phi(x_0, t)$ is
$$\rho(\Phi(x_0, t), t) = \rho_0(x_0) \underbrace{\exp\!\left( -\int_0^t \nabla \cdot f(\Phi(x_0, \tau)) \, \mathrm{d}\tau \right)}_{g(x_0, t)}, \qquad (2)$$
where $\Phi$ denotes the flow map of the system. By solving Eq. (2), which can be accelerated by approximating $g(x_0, t)$ with a neural network [27], and using interpolation methods, the whole density distribution $\rho(\cdot, t)$ at time $t$ can be estimated.
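To make Eq. (2) concrete, the following minimal Python sketch propagates a single initial state through the flow map and accumulates the divergence integral numerically; the toy dynamics, step size, and initial density value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative nonlinear dynamics (a Dubins-like car with heading damping);
# NOT the system from the paper, just a stand-in with nonzero divergence.
def f(x):
    v, k = 1.0, 0.5
    return np.array([v * np.cos(x[2]), v * np.sin(x[2]), -k * x[2]])

def div_f(x, eps=1e-5):
    # Numerical divergence of f at x via central differences.
    div = 0.0
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = eps
        div += (f(x + e)[i] - f(x - e)[i]) / (2 * eps)
    return div

def propagate_density(x0, rho0, t_end, dt=1e-3):
    """Return (Phi(x0, t_end), rho(Phi(x0, t_end), t_end)) via Eq. (2), explicit Euler."""
    x = np.array(x0, dtype=float)
    log_g = 0.0
    for _ in range(int(t_end / dt)):
        log_g -= div_f(x) * dt   # accumulates -int_0^t div f(Phi(x0, tau)) dtau
        x = x + f(x) * dt        # forward-integrates the flow map Phi
    return x, rho0 * np.exp(log_g)

x_T, rho_T = propagate_density([0.0, 0.0, 0.3], rho0=0.5, t_end=2.0)
print(x_T, rho_T)  # here div f = -k, so the density grows by exp(k * t_end)
```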
Problem formulation:
In this work, we want to find
an optimal control policy that steers an autonomous
system to a goal state while minimizing the collision
probability with dynamic obstacles. The state and input
constraints must be satisfied, and the system dynamics
can be nonlinear in the states.
In contrast to standard motion planning problems, we
consider an uncertain initial state following an arbitrary
but known probability density function. During the
execution of the control policy, the state can be measured
and used for closed-loop control.
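In symbols, this task can be summarized as a chance-constrained optimal control problem of roughly the following form, where the cost terms, the weight $\lambda$, and the constraint sets $\mathcal{X}, \mathcal{U}$ are placeholders chosen for exposition rather than the paper's exact formulation:
$$
\begin{aligned}
\min_{p} \quad & \mathbb{E}_{x_0 \sim \rho_0}\!\left[ \ell_{\mathrm{goal}}\big(x(T)\big) \right] + \lambda\, P_{\mathrm{coll}}(p) \\
\text{s.t.} \quad & \dot{x} = f\big(x, \pi(x; p)\big), \quad x(0) = x_0 \sim \rho_0, \\
& x(t) \in \mathcal{X}, \quad \pi(x(t); p) \in \mathcal{U} \quad \forall\, t \in [0, T],
\end{aligned}
$$
where $P_{\mathrm{coll}}(p)$ denotes the probability that the closed-loop trajectory intersects an obstacle and $T$ is the planning horizon.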
IV. DENSITY-BASED MOTION PLANNING
In this section, we explain the proposed algorithm.
First, we show how the density distribution of the closed-
loop dynamics can be predicted given a parameterized
control policy. Based on this, we provide an algorithm to
compute the collision probability. Next, we introduce the
cost function to evaluate the policy parameters. Lastly,
we describe the gradient-based optimization method.
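At a high level, these components can be combined into a plain gradient-descent loop over the policy parameters. The Python sketch below is only an outline; predict_density, collision_probability, and goal_cost are hypothetical placeholders for the modules described in the following subsections.

```python
import torch

def plan(p_init, predict_density, collision_probability, goal_cost,
         steps=200, lr=1e-2, lam=10.0):
    """Gradient-based refinement of the policy parameters p (illustrative sketch).

    The three callables are hypothetical placeholders:
      predict_density(p)                  -> (states, densities) of the closed-loop system,
      collision_probability(states, dens) -> differentiable collision risk vs. occupancy maps,
      goal_cost(states, p)                -> differentiable goal-reaching / effort cost.
    """
    p = p_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)
    for _ in range(steps):
        states, densities = predict_density(p)            # step 1: density prediction
        risk = collision_probability(states, densities)   # step 2: collision risk
        loss = goal_cost(states, p) + lam * risk           # step 3: combined objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return p.detach()
```

The point of this structure is that every component is differentiable in p, so the collision risk can be reduced by backpropagation rather than by sampling-based search.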
A. Density Estimation for Controlled Systems
A controlled system $\dot{x} = f(x, u)$ with policy $u = \pi(x; p)$ and parameters $p$ can be treated as an autonomous system with dynamics $f_p(x)$, and hence, its density can be computed with Eq. (2).
Although our approach can adapt to arbitrary control policies, we choose $p$ to parameterize a reference trajectory that is tracked by a tracking controller. By
utilizing contraction theory [28] for the controller syn-
thesis, the tracking error between the system and the
reference trajectory can be upper-bounded and formal
convergence guarantees can be provided. In this paper,
the contraction controller is learned with the algorithm
from [29] which requires the system to be control-affine.
However, the remaining parts of the density planner
do not need this special system structure such that the
planner can be applied to arbitrary system dynamics
when replacing the controller.
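As one possible instantiation of this parameterization (ours, for illustration only), $p$ could hold the polynomial coefficients of a reference trajectory that a simple tracking law follows; the paper instead uses a learned contraction controller [29].

```python
import numpy as np

def x_ref(t, p):
    # Illustrative choice: reference position as a polynomial in t with coefficients p.
    return sum(c * t**i for i, c in enumerate(p))

def pi(x, t, p, K=2.0):
    # Simple proportional tracking law as a stand-in; the paper learns a
    # contraction-based controller [29], which additionally bounds the tracking error.
    return -K * (x[0] - x_ref(t, p))

def f_p(x, t, p):
    # Closed-loop dynamics f_p(x) = f(x, pi(x; p)) for a toy single integrator
    # (state x = [position]); with time appended to the state, this plays the role
    # of the autonomous system whose density is predicted via Eq. (2).
    return np.array([pi(x, t, p)])

# Example rollout for a quadratic reference with parameters p = (p0, p1, p2).
x, p, dt = np.array([0.0]), np.array([0.0, 1.0, -0.1]), 0.01
for k in range(500):
    x = x + f_p(x, k * dt, p) * dt
```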
Analogous to [27], we train an NN to approximate $g$ and $\Phi$ from Eq. (2), using the initial state $x_0$, the prediction time $t_k$, and the control parameters $p$ as inputs. Assuming that the initial density $\rho_0$ at state $x_0$ is known, the density $\rho(\Phi(x_0, t_k), t_k)$ can be predicted.
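A minimal sketch of such a network is shown below; the plain MLP architecture and the choice of predicting $\log g$ instead of $g$ are our assumptions for illustration, not the exact design of [27].

```python
import torch
import torch.nn as nn

class DensityNN(nn.Module):
    """MLP mapping (x0, t_k, p) to (Phi(x0, t_k), log g(x0, t_k)) -- illustrative."""
    def __init__(self, state_dim, param_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1 + param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),
        )

    def forward(self, x0, tk, p):
        # x0: (..., state_dim), tk: (..., 1), p: (..., param_dim)
        z = torch.cat([x0, tk, p], dim=-1)
        out = self.net(z)
        phi, log_g = out[..., :-1], out[..., -1]
        return phi, log_g

# Density prediction: rho(Phi(x0, tk), tk) = rho0(x0) * exp(log_g), e.g.:
# model = DensityNN(state_dim=4, param_dim=10)
# phi, log_g = model(x0, tk, p); rho = rho0 * torch.exp(log_g)
```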
B. Computation of the Collision Probability
In this paper, we assume the probabilistic predictions
for the evolution of a 2D environment are given as
occupancy maps. The environment is evenly split into
$C_x \times C_y$ grid cells along the $x$ and $y$ directions. $P_{\mathrm{occ}}(c_x, c_y, t_k)$ denotes the probability that cell $(c_x, c_y)$ is occupied by an obstacle at time $t_k$. Obstacles can be road users, lanes, or static objects, and their respective occupancy probabilities can be generated by off-the-shelf environment prediction models.
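To illustrate how such occupancy maps can be combined with the predicted state densities, the sketch below bins the ego-state probability mass into the same $C_x \times C_y$ grid and sums the per-cell products, assuming (for illustration) that the ego state and the occupancy are independent; this is our reading of the idea, not the paper's exact algorithm.

```python
import numpy as np

def collision_probability(state_probs, occ_probs):
    """
    state_probs: (T, Cx, Cy) probability that the ego system is in cell (cx, cy) at t_k,
                 obtained by binning the predicted density rho(., t_k) into the grid.
    occ_probs:   (T, Cx, Cy) occupancy probabilities P_occ(cx, cy, t_k).
    Returns the per-time-step collision risk, assuming ego state and occupancy
    are independent.
    """
    per_step = (state_probs * occ_probs).sum(axis=(1, 2))  # expected overlap per time step
    return np.clip(per_step, 0.0, 1.0)

# Example aggregation over the horizon:
# risk = collision_probability(state_probs, occ_probs)
# total_risk = 1.0 - np.prod(1.0 - risk)
```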