Iterative Convex Optimization for Model Predictive Control
with Discrete-Time High-Order Control Barrier Functions
Shuo Liu∗1, Jun Zeng∗2, Koushil Sreenath2and Calin A. Belta1
∗Authors contributed equally. This work was supported in part by the NSF under grants IIS-2024606 and CMMI-1931853.
1S. Liu and C. Belta are with the Department of Mechanical Engineering, Boston University, Brookline, MA, 02215, USA {liushuo, cbelta}@bu.edu. 2J. Zeng and K. Sreenath are with the University of California, Berkeley, CA, 94720, USA {zengjunsjtu, koushils}@berkeley.edu.
Implementation code is released at https://github.com/ShockLeo/Iterative-MPC-DHOCBF.
Abstract— Safety is one of the fundamental challenges in
control theory. Recently, multi-step optimal control problems
for discrete-time dynamical systems were formulated to enforce stability while satisfying input constraints as well as safety-critical requirements, using discrete-time control barrier functions within a model predictive control (MPC) framework.
Existing work usually focuses on the feasibility or the safety of the optimization problem, and most of it restricts the discussion to control barrier functions with relative degree one. Moreover, real-time computation becomes challenging when a long horizon is considered in the MPC problem, for both relative-degree-one and high-order control barrier functions. In this paper, we propose a framework that solves the safety-critical MPC problem through iterative optimization, which is applicable to control barrier functions of any relative degree. In
the proposed formulation, the nonlinear system dynamics as
well as the safety constraints modeled as discrete-time high-
order control barrier functions (DHOCBF) are linearized at
each time step. Our formulation is generally valid for control barrier functions with an arbitrary relative degree. The advantages of fast computational performance and guaranteed safety are analyzed and validated with numerical results.
I. INTRODUCTION
A. Motivation
Safety-critical optimal control is a central problem in
robotics. For example, reaching a goal while avoiding ob-
stacles and minimizing energy can be formulated as a
constrained optimal control problem by using continuous-
time control barrier functions (CBFs) [1], [2]. By dividing
the timeline into small intervals, the problem is reduced to
a (possibly large) number of quadratic programs, which can
be solved at real-time speeds. However, this approach can be
too aggressive due to its lack of prediction over a future horizon.
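As a brief illustration (a generic sketch, not the specific formulation developed in this paper), consider a control-affine system \(\dot{x} = f(x) + g(x)u\) with a safe set \(\mathcal{C} = \{x : h(x) \geq 0\}\). A CBF enforces, at each sampled state, the condition
\[
\nabla h(x)^{\top}\big(f(x) + g(x)u\big) \geq -\alpha\big(h(x)\big),
\]
where \(\alpha\) is an extended class-\(\mathcal{K}\) function. This condition is affine in \(u\), so pairing it with a quadratic cost such as \(\min_{u} \lVert u - u_{\mathrm{ref}}\rVert^{2}\) yields a quadratic program at each sampling instant.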
Model predictive control (MPC) with CBFs [3] considers
the safety problem in the discrete-time domain, and provides
a smooth control policy as it involves future state information
along a receding horizon. However, the computational time
is relatively large and increases dramatically with a larger
horizon, since the optimization itself is usually nonlinear
and non-convex. An additional issue of this nonlinear model
predictive formulation is the feasibility of the optimization.
For CBFs with relative-degree one, relaxation techniques have been introduced in [4]. In this paper, we address
the above challenges with a proposed convex MPC with
linearized, discrete-time CBFs, under an iterative approach.
In contrast with the real-time iteration (RTI) approach in-
troduced in [5], which solves the problem through iterative
Newton steps, our approach solves the optimization problem
formulated by a convex MPC iteratively for each time step.
We show that the proposed approach can significantly reduce
the computational time, compared to the state of the art
introduced in [4], even for CBFs with high relative-degree,
without sacrificing the controller performance. The feasibility
rate of our proposed method also outperforms that of the
baseline method in [4] for large horizon lengths.
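For reference, the nonlinear MPC-CBF problem discussed above takes, roughly, the following form (a sketch based on [3] with a relative-degree-one DCBF constraint and generic stage and terminal costs \(q\) and \(p\); the exact formulation used in this paper is introduced later):
\[
\begin{aligned}
\min_{u_{t:t+N-1|t}} \quad & p(x_{t+N|t}) + \sum_{k=0}^{N-1} q(x_{t+k|t}, u_{t+k|t}) \\
\text{s.t.} \quad & x_{t+k+1|t} = f(x_{t+k|t}, u_{t+k|t}), \quad x_{t|t} = x_t, \\
& x_{t+k|t} \in \mathcal{X}, \quad u_{t+k|t} \in \mathcal{U}, \\
& h(x_{t+k+1|t}) - h(x_{t+k|t}) \geq -\gamma\, h(x_{t+k|t}), \quad 0 < \gamma \leq 1.
\end{aligned}
\]
The nonlinear dynamics and the DCBF constraint make this program non-convex; the approach proposed here linearizes both at each iteration so that every subproblem is convex.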
B. Related work
1) Model Predictive Control (MPC): MPC is widely used
in modern control systems, such as controller design for robotic manipulation and locomotion [6], [7], to obtain a control strategy as the solution of an optimization problem.
Stability was achieved in [8] by incorporating discrete-time
control Lyapunov functions (DCLFs) into a general MPC-
based optimization problem to realize real-time control on a
robotic system with limited computational resources. A growing body of recent work, e.g., [9], emphasizes safety in robot design and deployment, since safety is an important criterion for real-world tasks. Some works enforce safety criteria through the introduction of additional repelling functions [1], [10], while others regard obstacle avoidance as a concrete instance of a safety requirement for robots [11]–[13].
Those safety criteria are usually formulated as constraints in
optimization problems. This paper can be seen in the context
of MPC with safety constraints.
2) Continuous-Time CBFs: It has recently been shown
that to stabilize an affine control system while also satisfying
safety constraints and control limitations, CBFs can be
unified with control Lyapunov functions (CLFs) to form
a sequence of single-step optimization programs [1], [2],
[14], [15]. If the cost is quadratic, the optimizations are quadratic programs (QPs), and the solutions can be deployed in real time [1], [16]. Adaptive, robust, and stochastic ver-
sions of safety-critical control with CBFs were introduced
in [17]–[21]. For safety constraints expressed using functions
with high relative degree with respect to the dynamics of
the system, exponential CBFs [22] and high-order CBFs
(HOCBFs) [23]–[25] were proposed.
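To make the single-step program concrete, below is a minimal sketch of a CBF-CLF-QP for a planar single integrator, written with the cvxpy library; it is only illustrative (the obstacle, gains, and slack penalty are assumed for the example) and is not the formulation used in this paper.

import numpy as np
import cvxpy as cp

# Illustrative single integrator: x_dot = u, with x in R^2.
x = np.array([2.0, 1.0])                      # current state
x_goal = np.zeros(2)                          # stabilization target
x_obs, r_obs = np.array([1.0, 0.5]), 0.4      # assumed circular obstacle

# CBF: h(x) = ||x - x_obs||^2 - r_obs^2 >= 0 defines the safe set.
h = np.dot(x - x_obs, x - x_obs) - r_obs**2
grad_h = 2.0 * (x - x_obs)

# CLF: V(x) = ||x - x_goal||^2 encodes the stabilization objective.
V = np.dot(x - x_goal, x - x_goal)
grad_V = 2.0 * (x - x_goal)

u = cp.Variable(2)
delta = cp.Variable(nonneg=True)              # slack keeps the CLF constraint feasible
alpha, beta, p = 1.0, 1.0, 10.0               # assumed gains and slack penalty

constraints = [
    grad_h @ u >= -alpha * h,                 # CBF condition: h_dot >= -alpha * h(x)
    grad_V @ u <= -beta * V + delta,          # relaxed CLF condition: V_dot <= -beta * V(x) + delta
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + p * cp.square(delta)), constraints)
prob.solve()
print(u.value)                                # safe, stabilizing input for this state

Solving such a QP at every sampling instant, with the current state substituted, yields the real-time controllers described above.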
3) Discrete-Time CBFs: Discrete-time CBFs (DCBFs)
were introduced in [26] as a means to enable safety-critical
control for discrete-time systems. They were used in a