Learning of Dynamical Systems under Adversarial Attacks - Null Space
Property Perspective
Han Feng, Baturalp Yalcin, and Javad Lavaei
Abstract— We study the identification of a linear time-invariant dynamical system affected by large-and-sparse disturbances modeling adversarial attacks or faults. Under the
assumption that the states are measurable, we develop necessary
and sufficient conditions for the recovery of the system matrices
by solving a constrained lasso-type optimization problem. In
addition, we provide an upper bound on the estimation error
whenever the disturbance sequence is a combination of small
noise values and large adversarial values. Our results depend on
the null space property that has been widely used in the lasso literature, and we investigate under what conditions this property
holds for linear time-invariant dynamical systems. Lastly, we
further study the conditions for a specific probabilistic model
and support the results with numerical experiments.
I. INTRODUCTION
The identification of linear time-invariant (LTI) systems
is a classic problem in control theory that has been studied
extensively. Despite the long history of this problem and
its application in a wide range of real-world systems, the
non-asymptotic analysis of the system identification problem
has gained popularity in recent years, which targets the
sample complexity of the problem [1], [2]. With the growing
popularity of safety-critical applications, such as autonomous
driving and unmanned aerial vehicles, the design of a system
identification framework that is robust against adversarial
attacks is crucial [3].
In this paper, we consider LTI systems for which the states
are under a sequence of unknown disturbances, some of
which take small values modeling noise and the remaining
ones take strategic values due to adversarial attacks or
faults in the system. We study the lasso-based optimization
problem recently proposed in [4]. It can recover the exact system dynamics uniquely whenever adversarial attacks occur intermittently with enough time separation. In [4], some
adversarial attacks that do not influence the estimation
are studied, whereas this paper improves those results by
providing a necessary and sufficient condition for recovery
as well as non-asymptotic bounds on the error. Our approach
is based on defining a null space property that is analogous
to the null space property condition for the lasso problem
[5], which is required to guarantee the exact recovery.
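To make the setup concrete, the following sketch shows how a lasso-type estimator of this kind could be posed with an off-the-shelf convex solver. It assumes an unconstrained form that minimizes the sum of the $\ell_2$ norms of the residual columns $x_{t+1} - A x_t - B u_t$; the exact constrained formulation analyzed in the paper may differ, and all names here (estimate_dynamics, X, U) are illustrative rather than the authors' notation.

```python
import cvxpy as cp

def estimate_dynamics(X, U):
    """Estimate (A, B) from a state trajectory X of shape (n, T+1)
    and an input sequence U of shape (m, T)."""
    n = X.shape[0]
    m = U.shape[0]
    A = cp.Variable((n, n))
    B = cp.Variable((n, m))
    # Residual matrix D = [d_0, ..., d_{T-1}] with columns
    # d_t = x_{t+1} - A x_t - B u_t.
    D = X[:, 1:] - A @ X[:, :-1] - B @ U
    # Sum of l2 norms of the columns of D: a lasso-like objective that
    # promotes column-wise sparsity, so that a few large (attacked)
    # columns can absorb the adversarial disturbances.
    objective = cp.sum(cp.norm(D, 2, axis=0))
    cp.Problem(cp.Minimize(objective)).solve()
    return A.value, B.value
```

The non-smoothness of the column-wise $\ell_2$ penalty is what allows exact recovery under intermittent attacks, in contrast to a squared-residual objective, which spreads the attack energy over all parameters.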
The robustness analysis of estimators has a long history,
dating back to the seminal paper [6]. It is known that a
small disturbance on the estimation problem, such as the
perturbation of a data point, could lead to significant changes
in the outcome of the estimator. This has led to a major effort
on the robustification of estimators via regularizers. The works [7] and [8] have found a strong relationship between robust estimation and regularization in regression problems by showing the equivalence of these two problems.
The recent papers [9] and [10] on robust estimation of
linear measurement models have considered a framework
with two types of noise: small measurement noise and
large intermittent noise. They have developed necessary and
sufficient conditions for the exact recovery when a column-
wise summable norm is used to minimize the error. We focus
on this type of norm in this paper, which will be defined as
the sum of the $\ell_2$ norms of the columns of a matrix. Nevertheless,
our analysis is for the more challenging problem of system
identification where the parameters are correlated over time.
Our results indirectly provide a guideline on how to design
an effective input sequence to learn system dynamics faster.
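As a small illustration of the column-wise summable norm mentioned above, the snippet below computes the sum of the $\ell_2$ norms of the columns of a disturbance matrix; the function name and the numerical values are ours, chosen only for illustration.

```python
import numpy as np

def columnwise_l2_sum(D):
    """Sum of the l2 norms of the columns of D, i.e., sum_t ||d_t||_2."""
    return np.linalg.norm(D, ord=2, axis=0).sum()

# The first column is large (an attack); the remaining columns are
# zero or small (noise) and contribute little, which is why this norm
# pairs naturally with column-wise sparse disturbances.
D = np.array([[3.0, 0.0, 0.1],
              [4.0, 0.0, 0.0]])
print(columnwise_l2_sum(D))  # 5.0 + 0.0 + 0.1 = 5.1
```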
The recent papers [11] and [12] on system identification
have studied the problem of learning a sparse and structured
state-space model, and provided bounds on the required
sample size, i.e., sample complexity bounds. However, none
of the aforementioned works are applicable to adversarial
attacks since their noise/disturbance model is Gaussian. The
more recent work [13] has utilized a conic relaxation, which
significantly increases the problem dimension and is not
directly applicable to dynamical systems. It estimates how
many erroneous measurements or adversarial attacks can
be handled by the estimator without causing a nonzero
estimation error. There are some other works in the literature
that provide non-asymptotic error bounds for the linear
system identification problem when the ordinary least-squares
estimation method and the Kalman-Ho algorithm are used [14], [15]. However, these methods are not particularly efficient
for robust estimation whenever the data is corrupted in an
adversarial way. Set-membership estimators have also been utilized to show consistent estimation of linear systems [16].
Unfortunately, they do not provide non-asymptotic bounds.
The work [17] has studied the scenario where the attack is
executed on the outputs rather than the states. Unlike the
attack on the outputs, the effect of the attack on the states
propagates over time. Lastly, some other related works on
robust estimation are resilient state estimation [18], [19] and
Byzantine fault tolerance [20], [21].
One could place the system identification problem into the
broader context of robust regression to gain some valuable
insight on robust estimation. It is well-known that least-
squares methods are not robust to outliers. The work [22]
has studied the identification of outliers in linear regression.
It is shown that a non-convex loss function outperforms the