Reference Governor for Input-Constrained MPC to Enforce State Constraints at Lower Computational Cost

Miguel Castroviejo-Fernandez¹, Jordan Leung¹ and Ilya Kolmanovsky¹

¹University of Michigan, Ann Arbor, MI 48109 USA (mcastrov, jmleung, ilya@umich.edu). This research is supported by Air Force Office of Scientific Research Grant number FA9550-20-1-0385.

October 21, 2022
Abstract
In this paper, a control scheme is developed based on an input-constrained Model Predictive Controller (MPC) and the idea, usual in Reference Governors (RG), of modifying the reference command to enforce constraints. The proposed scheme, referred to as the RGMPC, requires only optimization for MPC with input constraints, for which fast algorithms exist, and can handle (possibly nonlinear) state and input constraints. Conditions are given that ensure recursive feasibility of the RGMPC scheme and finite-time convergence of the modified command to the desired reference command. Simulation results for a spacecraft rendezvous maneuver with linear and nonlinear constraints demonstrate that the RGMPC scheme has lower average computational time as compared to state and input constrained MPC, with similar performance.
1 Introduction
Model Predictive Control (MPC) is informed by optimization of a state and
input dependent cost function. At each time step, the input sequence that
minimizes this cost subject to constraints on the inputs and/or the states [1] is
computed and the input is set to the first element of the sequence. While MPC
has emerged as an effective control strategy for constrained systems and is used
in many applications, one of its primary drawbacks is the high computational
cost associated with solving the optimization problem at each time step. This
computational cost can be significantly lowered in the case of short horizon
Linear Quadratic MPC (LQ MPC) with only input constraints by exploiting
the underlying structure of the cost to speed up gradient computations as in the
Fast MPC algorithm of [2] or by employing accelerated primal projected gradient methods [3]. In addition, compared to the state-constrained case, it is easier to enforce anytime feasibility properties [4] for input-constrained MPC (e.g., by saturating the computed input in the case of box constraints), to analyze the impact of inexact implementation [5, 6], to certify an inexact solution [7], and to exploit regularity properties. For example, [8] performs the analysis of an inexact implementation of state and input constrained MPC. Finally, handling nonlinear constraints requires the use of more computationally expensive nonlinear MPC.
To capitalize on the advantages of short-horizon input-constrained MPC (uMPC) with polytopic input constraints yet be able to handle state constraints and (possibly nonlinear) input constraints, in this paper we consider the augmentation of uMPC with a reference governor (RG). RGs [9] are add-on schemes that ensure, at each time step, selection of the reference command so that subsequent trajectories remain feasible with respect to constraints. However, the direct application of existing RGs to uMPC-based closed-loop systems is difficult. For instance, if the RG is based on online prediction [10, 11], a uMPC optimization problem would need to be solved at each time step over the reference governor prediction horizon; this would likely exceed the computational cost of a state and input constrained MPC (cMPC).
In this paper we propose a new scheme which enables a computationally efficient application of RGs to complement uMPC in controlling linear systems with (possibly nonlinear) state constraints and nonlinear input constraints. This scheme, that we refer to as RGMPC, only requires that a single uMPC optimization problem be solved per time step.
For the proposed RGMPC scheme we show, under suitable assumptions, recursive feasibility as well as finite-time convergence of the modified reference command to the desired constant reference command, i.e., properties expected of conventional RGs. Simulation results for a spacecraft rendezvous (RdV) problem demonstrate that the proposed approach achieves low computational requirements and good closed-loop performance.
The paper is organized as follows. In Section 2 the class of systems being addressed is discussed, and the two main ingredients needed for subsequent developments, uMPC and the Incremental Reference Governor (IRG) of [11], are reviewed. Section 3 introduces the proposed RGMPC scheme and presents theoretical results. Finally, numerical simulations of the proposed scheme applied to a spacecraft RdV maneuver are reported in Section 4.
Notations: $\mathbb{S}^n_{++}$ and $\mathbb{S}^n_{+}$ denote the sets of symmetric $n \times n$ positive definite and positive semi-definite matrices, respectively. $I_m$ denotes the $m \times m$ identity matrix. Given $x \in \mathbb{R}^n$ and $W \in \mathbb{S}^n_{+}$, the $W$-norm of $x$ is $\|x\|_W = \sqrt{x^\top W x}$. Given $P \in \mathbb{S}^n_{++}$ and $y \in \mathbb{R}^n$, $\mathcal{B}_P(y, r) = \{x \in \mathbb{R}^n \mid \|y - x\|_P \le r\}$ and $\lambda_{+}(P)$ is the maximum eigenvalue of $P$. Given $a \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, $(a, b) = [a^\top, b^\top]^\top$. The sequence made of the elements $\alpha_j \in \mathbb{R}^n$, $j = a, \dots, b$, is denoted by $\{\alpha_j\}_{j=a}^{b}$. The set $\mathbb{N}$ is the set of positive integers and $\mathbb{N}_0$ the set of non-negative ones.
2 Preliminaries
2.1 Class of systems
We consider a class of systems represented by the following linear discrete-time models,

$$x_{k+1} = A x_k + B u_k, \tag{1a}$$
$$y_k = C x_k, \tag{1b}$$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$ and $k \in \mathbb{N}_0$. The system is subject to hard constraints on both states and inputs:

$$z_k = (x_k, u_k) \in \mathcal{Z}, \quad \forall k \ge 0, \tag{2a}$$
$$\mathcal{Z} = \{(x, u) \mid x \in \mathcal{X},\ u \in \mathcal{U}\} \subseteq \mathbb{R}^{n+m}, \tag{2b}$$

where $\mathcal{X} \subseteq \mathbb{R}^n$, $\mathcal{U} \subseteq \mathbb{R}^m$ are compact, convex sets with the origin in their interiors. Furthermore, we make the following assumption:
Assumption 1 The pair $(A, B)$ is stabilizable.
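To fix ideas, the following sketch instantiates the system class (1)-(2) for a hypothetical double integrator with box state and input sets; the sampling period, matrices and bounds are illustrative placeholders, not the paper's rendezvous case study.

```python
import numpy as np

# Hypothetical double integrator discretized with sampling period Ts
# (illustrative only; not the paper's spacecraft model).
Ts = 0.1
A = np.array([[1.0, Ts],
              [0.0, 1.0]])          # A in R^{n x n}, n = 2
B = np.array([[0.5 * Ts**2],
              [Ts]])                # B in R^{n x m}, m = 1
C = np.array([[1.0, 0.0]])          # output y = position, p = 1

# Box sets X and U: compact, convex, with the origin in their interiors.
x_min, x_max = np.array([-10.0, -2.0]), np.array([10.0, 2.0])
u_min, u_max = np.array([-1.0]), np.array([1.0])

def in_Z(x, u):
    """Membership test for the hard constraint z_k = (x_k, u_k) in Z of (2)."""
    return bool(np.all(x_min <= x) and np.all(x <= x_max)
                and np.all(u_min <= u) and np.all(u <= u_max))
```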
2.2 Characterization of the steady states and inputs
We consider the reference command (set-point) tracking problem of bringing the output, state and input of the system to a specific set-point $r \in \mathbb{R}^p$ and to the associated steady states and inputs $x_{ss}$, $u_{ss}$, respectively. Using the usual definition of a steady state and (1), the set-points must satisfy the following:

$$\begin{bmatrix} A - I_n & B & 0_{n \times p} \\ C & 0_{p \times m} & -I_p \end{bmatrix} \begin{bmatrix} x_{ss} \\ u_{ss} \\ r \end{bmatrix} = M \begin{bmatrix} x_{ss} \\ u_{ss} \\ r \end{bmatrix} = 0. \tag{3}$$
Assumption 1 ensures that (3) has a solution [12]. In the following, we define $z_{ss}(r) = (x_{ss}(r), u_{ss}(r))$, where $(z_{ss}(r), r)$ are solutions to (3). Given the existence of constraints, the following equation describes an inner approximation of the set of admissible reference commands:

$$\mathcal{R} = \left\{ r \in \mathbb{R}^p \;\middle|\; \exists\, z \in \tilde{\mathcal{Z}},\ M(z, r) = 0 \right\},$$

where $\tilde{\mathcal{Z}} \subset \operatorname{Int} \mathcal{Z}$ is a compact and convex set. This, under Assumption 1, implies that $\mathcal{R}$ is compact and convex.
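Under Assumption 1, (3) can be solved numerically for a given set-point $r$ by moving the known $r$ to the right-hand side. A minimal sketch (the function name is ours; NumPy's least-squares solver returns a solution whenever one exists):

```python
import numpy as np

def steady_state(A, B, C, r):
    """Solve (3) for (x_ss(r), u_ss(r)) given a set-point r.

    Rearranging (3): [A - I_n, B; C, 0_{p x m}] [x_ss; u_ss] = [0; r].
    """
    n, m = B.shape
    p = C.shape[0]
    M_left = np.block([[A - np.eye(n), B],
                       [C, np.zeros((p, m))]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(r)])
    sol, *_ = np.linalg.lstsq(M_left, rhs, rcond=None)
    return sol[:n], sol[n:]  # x_ss(r), u_ss(r)
```

For the double-integrator example above, `steady_state(A, B, C, 5.0)` returns $x_{ss} = (5, 0)$ and $u_{ss} = 0$, i.e., rest at the commanded position.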
2.3 Input constrained MPC
As explained in the introduction, uMPC offers several advantages as compared
to state and input constrained MPC (cMPC). In the following we consider short-
horizon uMPC with a quadratic cost function,
$$J(\xi, \mu, v) = \sum_{i=0}^{N_{\mathrm{MPC}}-1} \left( \|\xi_i - x_{ss}(v)\|_Q^2 + \|\mu_i - u_{ss}(v)\|_R^2 \right) + \|\xi_{N_{\mathrm{MPC}}} - x_{ss}(v)\|_P^2,$$

where $\xi = \{\xi_i\}_{i=0}^{N_{\mathrm{MPC}}}$, $\mu = \{\mu_i\}_{i=0}^{N_{\mathrm{MPC}}-1}$, $Q \in \mathbb{R}^{n \times n}$, $R \in \mathbb{R}^{m \times m}$, $P \in \mathbb{R}^{n \times n}$ and $N_{\mathrm{MPC}} \in \mathbb{N}$. The MPC law is defined using the solution to the following Optimal Control Problem (OCP) $Pr(x, v, N_{\mathrm{MPC}})$:

$$\begin{aligned}
\min_{\xi, \mu} \quad & J(\xi, \mu, v) && \text{(4a)} \\
\text{s.t.} \quad & \xi_0 = x, && \text{(4b)} \\
& \xi_{i+1} = A\xi_i + B\mu_i, \quad i = 0, \dots, N_{\mathrm{MPC}}-1, && \text{(4c)} \\
& \mu_i \in \mathcal{U}, \quad i = 0, \dots, N_{\mathrm{MPC}}-1. && \text{(4d)}
\end{aligned}$$
We assume that

Assumption 2 $Q \in \mathbb{S}^n_{++}$, $R \in \mathbb{S}^m_{++}$, $P \in \mathbb{S}^n_{++}$ and $P = Q + A^\top P A - (A^\top P B)(R + B^\top P B)^{-1}(B^\top P A)$, i.e. $P$ is the solution to the Discrete Algebraic Riccati Equation (DARE).
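A terminal weight satisfying Assumption 2 is the stabilizing solution of the DARE and can be obtained from a standard Riccati solver. A minimal sketch with SciPy, using illustrative weights for the double-integrator example from Section 2.1:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# A, B as in the earlier double-integrator sketch.
Ts = 0.1
A = np.array([[1.0, Ts], [0.0, 1.0]])
B = np.array([[0.5 * Ts**2], [Ts]])

# Illustrative positive-definite weights (Assumption 2 requires Q, R, P > 0).
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

# Stabilizing solution of P = Q + A'PA - (A'PB)(R + B'PB)^{-1}(B'PA).
P = solve_discrete_are(A, B, Q, R)
```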
Finally, let
$$\{u^*_j(x, v, N_{\mathrm{MPC}})\}_{j=0}^{N_{\mathrm{MPC}}-1} \tag{5}$$
denote the solution to $Pr(x, v, N_{\mathrm{MPC}})$. Then, at time instant $k$, the MPC-computed input is given by $u_k = u^*_0(x_k, v_k, N_{\mathrm{MPC}})$. Assumption 1 and $Q \in \mathbb{S}^n_{++}$ ensure the existence of a stabilizing solution to the DARE in Assumption 2, and since $0 \in \operatorname{Int} \mathcal{U}$, the MPC control law is locally stabilizing at strictly constraint-admissible equilibria [13]. Note that the MPC described in this section does not handle state constraints; these will be handled by the IRG.
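The choice of uMPC solver is left open; in the spirit of the projected gradient methods cited in the introduction, the sketch below solves (4) for a box input set by projected gradient on the condensed QP. The function name, fixed iteration count and plain (non-accelerated) iteration are illustrative simplifications; a practical implementation would add acceleration, warm starting and a termination test.

```python
import numpy as np

def umpc_projected_gradient(A, B, Q, R, P, x0, x_ss, u_ss,
                            u_min, u_max, N, iters=200):
    """Solve OCP (4) for box input constraints via projected gradient.

    The dynamics (4c) are eliminated by condensing:
    [xi_1; ...; xi_N] = Phi x0 + Gam U, with U = [mu_0; ...; mu_{N-1}].
    """
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gam = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gam[i*n:(i+1)*n, j*m:(j+1)*m] = \
                np.linalg.matrix_power(A, i - j) @ B

    # Stacked weights: Q on xi_1..xi_{N-1}, P on xi_N, R on each input.
    # (The xi_0 stage cost is constant in U and can be dropped.)
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-n:, -n:] = P
    Rbar = np.kron(np.eye(N), R)
    Xref, Uref = np.tile(x_ss, N), np.tile(u_ss, N)

    # J(U) = U'HU + 2 f'U + const, so grad J = 2 (H U + f).
    H = Gam.T @ Qbar @ Gam + Rbar
    f = Gam.T @ Qbar @ (Phi @ x0 - Xref) - Rbar @ Uref

    lo, hi = np.tile(u_min, N), np.tile(u_max, N)
    L = np.max(np.linalg.eigvalsh(H))       # grad J is 2L-Lipschitz
    U = np.clip(Uref, lo, hi)               # feasible initial guess
    for _ in range(iters):
        # Gradient step of size 1/(2L) followed by exact box projection.
        U = np.clip(U - (H @ U + f) / L, lo, hi)
    return U[:m]                            # u*_0(x0, v, N_MPC)
```

Since the feasible set is a box, the Euclidean projection reduces to an elementwise clamp, which is what makes uMPC iterations so cheap compared to cMPC.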
2.4 Incremental Reference Governor (IRG)
For the time being, suppose that a control law for system (1),

$$u = g(x, r), \tag{6}$$

which depends on the state $x$ and reference command $r$, is available. We define $u^g_j(x, r) = g(x^g_j(x, r), r)$, $j \in \mathbb{N}_0$, and $x^g_j(x, r) = A^j x + \sum_{i=0}^{j-1} A^{j-1-i} B u^g_i$ for $j \ge 1$, with $x^g_0(x, r) = x$. The corresponding state-input vector is $z^g_j(x, r) = (x^g_j, u^g_j)$.

Now, considering (1) in closed-loop with controller (6), the aim of the IRG is to adjust the reference command that the system follows in such a way as to ensure that constraints are enforced. The IRG accomplishes this by testing whether an increment of the current reference command leads to constraint-admissible trajectories.
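A minimal sketch of this test, by forward simulation of (1) under (6) over a finite horizon; the finite-horizon check, the increment logic and the helper names are illustrative simplifications of the IRG of [11], which involves additional conditions.

```python
import numpy as np

def admissible(A, B, g, x, r, horizon, in_Z):
    """Check that z_j^g(x, r) = (x_j^g, u_j^g) stays in Z for j = 0..horizon-1,
    simulating x_{j+1}^g = A x_j^g + B u_j^g with u_j^g = g(x_j^g, r)."""
    xj = np.asarray(x, dtype=float)
    for _ in range(horizon):
        uj = g(xj, r)
        if not in_Z(xj, uj):
            return False
        xj = A @ xj + B @ uj
    return True

def irg_step(A, B, g, x, v, r, delta, horizon, in_Z):
    """One IRG update: move the applied reference v toward the desired
    reference r by at most delta per component, keeping v unchanged if the
    incremented reference fails the admissibility test."""
    v_try = v + np.clip(r - v, -delta, delta)
    return v_try if admissible(A, B, g, x, v_try, horizon, in_Z) else v
```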