
text of adaptive control, in Bin et al. (2019) where discrete-
time adaptation algorithms are used in the context of
multivariable linear systems, and in Forte et al. (2016), Bin
and Marconi (2019), Bin et al. (2020), where adaptation
of a nonlinear internal model is approached as a system
identification problem.
Learning dynamics models is also an active research
topic. In particular, Gaussian Processes (GPs) are in-
creasingly used to estimate unknown dynamics (Kocijan
(2016), Buisson-Fenet et al. (2020)). Unlike other nonpara-
metric models, GPs represent an attractive tool in learning
dynamics due to their flexibility in modeling nonlinearities
and the possibility to incorporate prior knowledge (Rasmussen (2003)). Moreover, since GP posteriors admit closed-form expressions, theoretical guarantees on the posterior estimate can be derived directly from the collected data (Umlauft and Hirche (2020), Lederer et al. (2019)). Recently, GP models have spread into the field of nonlinear optimal control (Sforni et al. (2021)), with several applications to the particular case of Model Predictive Control (MPC) (Torrente et al. (2021), Kabzan et al. (2019)), and into the field of nonlinear observers (Buisson-Fenet et al. (2021)).
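Since these works rely on the closed-form GP posterior, a minimal regression sketch may help fix ideas (purely illustrative: the kernel, data, and hyperparameters below are our own choices, not taken from the cited references):

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sf=1.0):
    # Squared-exponential kernel k(x, x') = sf^2 * exp(-(x - x')^2 / (2 ell^2))
    d = X1[:, None] - X2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Noisy samples of an unknown scalar map (stand-in for unknown dynamics)
rng = np.random.default_rng(0)
X = np.linspace(-2.0, 2.0, 15)
y = np.sin(2.0 * X) + 0.01 * rng.standard_normal(15)

sn2 = 1e-4                         # assumed measurement-noise variance
K = rbf(X, X) + sn2 * np.eye(len(X))
Xs = np.array([0.3, 1.1])          # test inputs
Ks = rbf(Xs, X)

# Analytical GP posterior: one linear solve against the kernel matrix
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                                   # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)   # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior mean and variance are obtained in closed form, which is the analytical tractability the paragraph above refers to.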
Contributions In this paper, we propose a data-driven
adaptive output regulation scheme, built on top of the re-
cently published works Bin et al. (2020) and Gentilini et al.
(2022), in which the problem of approximate regulation is
solved by means of a regulator embedding an adaptive
internal model. Unlike previous approaches, here the high
flexibility of Gaussian process priors (Rasmussen (2003))
is used to adapt an internal model unit in a discrete-time
system identification fashion, making it possible to handle a possibly infinite class of input signals needed to ensure zero regulation error (the so-called friend, Isidori and Byrnes (1990)). Compared to Bin et al. (2020), where the identifier is tied to a particular choice of function class, to which the friend may (or may not) belong,
the proposed approach aims to perform probabilistic infer-
ence in a possibly infinite-dimensional space. Unlike Gen-
tilini et al. (2022), the proposed regulator relies on non-
high-gain stabilising actions and Luenberger-like internal
models that lead to a fixed choice of the model order.
The latter property, jointly with the black-box nature of
Gaussian process methods, makes the proposed approach
suitable for those applications where the exosystem dy-
namics is highly uncertain and the friend structure is not
a priori known. Theoretical performance bounds on the
attained regulation error are analytically established.
The paper unfolds as follows. In Section 2 we briefly
describe the problem at hand along with the standing
assumptions on which the presented results build. Section 2.1
reviews the most recent advancements in the output regulation field, and introduces the bare-bones regulator adapted
for this work, while Section 2.2 introduces the basics of
Gaussian process inference. In Section 3 we present the
proposed regulator and state the main result of the paper.
Finally, in Section 4 a numerical example is presented.
2. PROBLEM SET-UP & PRELIMINARIES
In this section, we first detail the subclass of problems
that this work focuses on, along with the constructive as-
sumptions. Then, a Luenberger-like internal model design
technique is reviewed, together with the adaptive regulator
of Bin et al. (2020). Finally, basic concepts behind the
notion of Gaussian process regression are introduced.
2.1 Approximate Nonlinear Regulation
In this paper, we focus on a subclass of the general
regulation problem presented in Section 1, by considering
systems of the form
$$
\dot z = f_0(w, z, e), \qquad
\dot e = A e + B\,\big(q(w, z, e) + b(w, z, e)\, u\big), \qquad
y = C e,
\tag{3}
$$
in which $z \in \mathbb{R}^{n_z}$, together with the error dynamics $e \in \mathbb{R}^{n_e}$, represents the overall state of the plant. The quantities $u \in \mathbb{R}^{n_y}$ and $y \in \mathbb{R}^{n_y}$ are the control input and the measured output respectively, while $w \in \mathbb{R}^{n_w}$ is an exogenous input, $f_0 : \mathbb{R}^{n_w} \times \mathbb{R}^{n_z} \times \mathbb{R}^{n_e} \to \mathbb{R}^{n_z}$, $q : \mathbb{R}^{n_w} \times \mathbb{R}^{n_z} \times \mathbb{R}^{n_e} \to \mathbb{R}^{n_y}$, $b : \mathbb{R}^{n_w} \times \mathbb{R}^{n_z} \times \mathbb{R}^{n_e} \to \mathbb{R}^{n_y \times n_y}$ are continuous functions, and $A$, $B$, and $C$ are defined as
$$
A = \begin{bmatrix} 0_{(r-1)n_y \times n_y} & I_{(r-1)n_y} \\ 0_{n_y \times n_y} & 0_{n_y \times (r-1)n_y} \end{bmatrix}, \qquad
B = \begin{bmatrix} 0_{(r-1)n_y \times n_y} \\ I_{n_y} \end{bmatrix}, \qquad
C = \begin{bmatrix} I_{n_y} & 0_{n_y \times (r-1)n_y} \end{bmatrix},
$$
for some $r \in \mathbb{N}$, forming a chain of $r$ integrators of dimension $n_y$. This framework embraces a large number of use cases addressed in the literature. In particular, all systems presenting a well-defined vector relative degree and admitting a canonical normal form, or that are strongly invertible and feedback linearisable, fit within the proposed framework. Nevertheless, the approach is limited to systems having an equal number of inputs and controlled outputs ($n_y$). The results presented in the next sections are grounded on the following set of standing assumptions.
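As a sanity check, the block matrices $A$, $B$, $C$ defined above can be constructed for illustrative sizes $r$ and $n_y$ (our own choice) and the chain-of-integrators structure verified:

```python
import numpy as np

def chain_matrices(r, ny):
    # A, B, C for a chain of r integrators, each block of dimension ny:
    # A has the identity on the first block superdiagonal, B injects into
    # the last block, C reads the first block.
    n = r * ny
    A = np.zeros((n, n))
    A[: (r - 1) * ny, ny:] = np.eye((r - 1) * ny)
    B = np.zeros((n, ny))
    B[(r - 1) * ny :, :] = np.eye(ny)
    C = np.zeros((ny, n))
    C[:, :ny] = np.eye(ny)
    return A, B, C

A, B, C = chain_matrices(r=3, ny=2)  # illustrative sizes
```

By construction $C A^k B = 0$ for $k < r - 1$ and $C A^{r-1} B = I_{n_y}$, i.e. the output is driven by the input through $r$ integrations.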
Assumption 1. The function $f_0$ is locally Lipschitz, and the functions $q$ and $b$ are $C^1$ functions with locally Lipschitz derivatives.
Assumption 2. There exists a $C^1$ map $\pi : \mathcal{P} \subset \mathbb{R}^{n_w} \to \mathbb{R}^{n_z}$, with $\mathcal{P}$ an open neighborhood of $\mathcal{W}$, satisfying
$$
L_{s(w)} \pi(w) = f_0(w, \pi(w), 0),
$$
with $L_{s(w)} \pi(w) = \frac{\partial \pi(w)}{\partial w}\, s(w)$, such that the system
$$
\dot w = s(w), \qquad \dot z = f_0(w, z, e),
$$
is Input-to-State Stable (ISS) with respect to the input $e$, relative to the compact set $\mathcal{A} = \{(w, z) \in \mathcal{W} \times \mathbb{R}^{n_z} : z = \pi(w)\}$.
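For intuition only, consider a linear special case of our own choosing (not the paper's setting): with $s(w) = S w$ and $f_0(w, z, 0) = F z + P w$, the map becomes $\pi(w) = \Pi w$, and the condition $L_{s(w)}\pi(w) = f_0(w, \pi(w), 0)$ reduces to the Sylvester equation $\Pi S = F \Pi + P$, which can be solved by vectorisation:

```python
import numpy as np

# Illustrative linear data: harmonic exosystem, Hurwitz zero dynamics
# (so the ISS requirement of Assumption 2 holds).
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = np.array([[-1.0, 0.5], [0.0, -2.0]])
P = np.eye(2)

# Solve F Pi - Pi S = -P via column-major vec:
# (I kron F - S^T kron I) vec(Pi) = -vec(P)
n = S.shape[0]
M = np.kron(np.eye(n), F) - np.kron(S.T, np.eye(n))
Pi = np.linalg.solve(M, -P.flatten(order="F")).reshape((n, n), order="F")
```

The solution is unique here because $F$ and $S$ have no common eigenvalues.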
Assumption 3. There exists a known constant nonsingular matrix $\bar b \in \mathbb{R}^{n_y \times n_y}$ such that the inequality
$$
\big\| (b(w, z, e) - \bar b)\, \bar b^{-1} \big\| \le 1 - \mu_0
$$
holds for some known scalar $\mu_0 \in (0, 1)$, and for all $(w, z, e) \in \mathcal{W} \times \mathbb{R}^{n_z} \times \mathbb{R}^{n_e}$.
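A numerical check of this inequality for candidate matrices might look as follows (illustrative values; the induced 2-norm is our assumption, as the extracted text does not specify the norm):

```python
import numpy as np

def assumption3_holds(b, b_bar, mu0):
    # Check || (b - b_bar) b_bar^{-1} || <= 1 - mu0 (spectral norm assumed)
    gap = (b - b_bar) @ np.linalg.inv(b_bar)
    return np.linalg.norm(gap, 2) <= 1.0 - mu0

b_bar = np.array([[2.0, 0.0], [0.0, 2.0]])      # nominal nonsingular matrix
mu0 = 0.4
b_sample = np.array([[2.5, 0.2], [0.1, 1.8]])   # a sampled b(w, z, e) value
```

In practice such a check would be run over a grid of $(w, z, e)$ samples; a single evaluation only shows the mechanics.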
Remark 2. Although not necessary (see Byrnes and Isidori (2003)), Assumption 2 is a minimum-phase assumption customarily made in the literature of output regulation (see Isidori (2017), Pavlov et al. (2006)). In particular, Assumption 2 asks that the zero dynamics
$$
\dot w = s(w), \qquad \dot z = f_0(w, z, 0),
$$
admit a steady state of the kind $z = \pi(w)$, compatible with the control objective $y = 0$. As a consequence, the ideal input $u^\star$ making the set $\mathcal{B} = \mathcal{A} \times \{0\}$ invariant for (3) reads as
$$
u^\star(w, \pi(w)) = -b(w, \pi(w), 0)^{-1}\, q(w, \pi(w), 0).
$$
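Numerically, evaluating this ideal input at a given point reduces to a single linear solve; a sketch with illustrative values of $q$ and $b$ on the steady state (not from the paper):

```python
import numpy as np

def friend(q_val, b_val):
    # Ideal input u_star = -b^{-1} q, computed via a linear solve
    # instead of an explicit inverse. q_val: R^{ny}; b_val: nonsingular ny x ny.
    return -np.linalg.solve(b_val, q_val)

q_val = np.array([1.0, -0.5])
b_val = np.array([[2.0, 0.0], [0.0, 4.0]])
u_star = friend(q_val, b_val)  # [-0.5, 0.125]
```

By construction $b\,u^\star + q = 0$, i.e. $u^\star$ cancels the residual dynamics on the zero-error manifold.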