
On the Forward Invariance of Neural ODEs
Wei Xiao¹, Tsun-Hsuan Wang¹, Ramin Hasani¹, Mathias Lechner¹, Yutong Ban¹, Chuang Gan², Daniela Rus¹
Abstract

We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation. Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system. This setup allows us to achieve output specification guarantees simply by changing the constrained parameters/inputs both during training and inference. Moreover, we demonstrate that our invariance set propagation through data-controlled neural ODEs not only maintains generalization performance but also creates an additional degree of robustness by enabling causal manipulation of the system's parameters/inputs. We test our method on a series of representation learning tasks, including modeling physical dynamics and convexity portraits, as well as safe collision avoidance for autonomous vehicles.
1. Introduction
Neural ODEs (Chen et al., 2018) are continuous deep learning models that enable a range of useful properties such as exploiting dynamical systems as an effective learning class (Haber & Ruthotto, 2017; Gu et al., 2021), efficient time series modeling (Rubanova et al., 2019; Lechner & Hasani, 2022), and tractable generative modeling (Grathwohl et al., 2018; Liebenwein et al., 2021).
¹Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. ²MIT-IBM Watson AI Lab. Videos and code are available on the website: https://weixy21.github.io/invariance/. Correspondence to: Wei Xiao <weixy@mit.edu>.

Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

[Figure 1 graphic: a neural ODE with invariance propagation I(t) mapping an input to an output that satisfies the output spec. Panel text: "What if there is an obstacle in the flow of a neural ODE?"; the panels contrast a neural ODE with no invariance against a neural ODE with invariance; "We use Neural ODEs with Invariance."]

Figure 1. Invariance propagation for neural ODEs. Output specifications can be guaranteed with invariance, including specification satisfaction between samplings, e.g., spiral curve regression with critical region avoidance.

Neural ODEs are typically trained via empirical risk minimization (Rumelhart et al., 1986; Pontryagin, 2018) endowed with proper regularization schemes (Massaroli et al., 2020), with little control over the behavior of the obtained network or over its ability to account for counterfactual inputs (Vorbach et al., 2021). For example, a well-trained neural ODE instance that has learned to chase a spiral dynamic (Fig. 1B) would not be able to avoid an object on its flow, even if it has seen this type of output specification/constraint during training. This shortcoming demands a fundamental fix to ensure the safe operation of these models, specifically in safety-critical applications such as robust and trustworthy policy learning, safe robot control, and system verification (Lechner et al., 2020; Kim et al., 2021; Hasani et al., 2022).
In this paper, we set out to ensure neural ODEs satisfy
output specifications. To this end, we introduce the concept
of propagating invariance sets. An invariance set is a form
of specification consisting of physical laws, mathematical
expressions, safety constraints, and other prior knowledge of
the structure of the learning task. We can ensure that neural
ODEs are invariant to noise and affine transformations such
as rotating, translating, or scaling an input, as well as to
other uncertainties in training and inference.
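A standard mechanism for keeping a flow inside an invariance set {x : h(x) ≥ 0} is to enforce the barrier condition ḣ(x) + α·h(x) ≥ 0 along trajectories. As a minimal, self-contained sketch of that idea (not the paper's implementation — the spiral dynamics matrix, obstacle location, and gain α below are illustrative assumptions), the following keeps a hand-written 2D flow outside a circular unsafe region by adding the smallest correction along ∇h whenever the condition would otherwise be violated:

```python
import numpy as np

# Illustrative setup (assumed, not from the paper):
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])    # spiral-sink "learned" dynamics
c, r = np.array([1.36, -1.36]), 0.45         # circular obstacle (unsafe region)
alpha = 5.0                                  # class-K gain in the barrier condition

def h(x):
    # Barrier: h(x) >= 0  <=>  x lies outside the obstacle.
    return np.dot(x - c, x - c) - r**2

def safe_step(x, dt):
    f = A @ x                                # nominal vector field
    grad = 2.0 * (x - c)                     # gradient of h
    viol = grad @ f + alpha * h(x)           # want: h_dot + alpha*h >= 0
    lam = max(0.0, -viol) / (grad @ grad)    # minimal correction along grad h
    return x + dt * (f + lam * grad), lam > 0.0

x, dt = np.array([2.0, 0.0]), 0.005
hs, active = [], False
for _ in range(2000):
    x, act = safe_step(x, dt)
    active |= act
    hs.append(h(x))

print(min(hs) > -1e-2, active)   # trajectory stays (numerically) outside the obstacle
```

Because the correction is zero whenever the barrier condition already holds, the flow is untouched away from the obstacle and only bends near the boundary of the safe set, which is the qualitative behavior illustrated in Fig. 1C.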
To propagate invariance sets through neural ODEs, we can use Lyapunov-based methods with forward invariance properties, such as a class of control barrier functions (CBFs) (Ames et al., 2017), to formally guarantee that the output
arXiv:2210.04763v2 [cs.LG] 31 May 2023