
Characterizing information loss in a chaotic double
pendulum with the Information Bottleneck
Kieran A. Murphy1 and Dani S. Bassett1,2,3,4,5,6,7
1Dept. of Bioengineering, School of Engineering & Applied Science,
U. of Pennsylvania, Philadelphia, PA 19104, USA
2Dept. of Electrical & Systems Engineering, School of Engineering & Applied Science,
U. of Pennsylvania, Philadelphia, PA 19104, USA
3Dept. of Neurology, Perelman School of Medicine, U. of Pennsylvania, Philadelphia, PA 19104, USA
4Dept. of Psychiatry, Perelman School of Medicine, U. of Pennsylvania, Philadelphia, PA 19104, USA
5Dept. of Physics & Astronomy, College of Arts & Sciences, U. of Pennsylvania, Philadelphia, PA 19104, USA
6The Santa Fe Institute, Santa Fe, NM 87501, USA
7To whom correspondence should be addressed: dsb@seas.upenn.edu
Abstract
A hallmark of chaotic dynamics is the loss of information with time. Although
information loss is often expressed through a connection to Lyapunov exponents—
valid in the limit of high information about the system state—this picture misses the
rich spectrum of information decay across different levels of granularity. Here we
show how machine learning presents new opportunities for the study of information
loss in chaotic dynamics, with a double pendulum serving as a model system. We
use the Information Bottleneck as a training objective for a neural network to
extract information from the state of the system that is optimally predictive of the
future state after a prescribed time horizon. We then decompose the optimally
predictive information by distributing a bottleneck to each state variable, recovering
the relative importance of the variables in determining future evolution. The
framework we develop is broadly applicable to chaotic systems and pragmatic to
apply, leveraging data and machine learning to monitor the limits of predictability
and map out the loss of information.
1 Introduction
A fundamental aspect of chaos is the loss of information over time: for any measurement of a chaotic
system with finite resolution, there is a finite time horizon beyond which the measurement bears
no predictive power [22, 8, 26, 5, 11, 4]. The premise of this work is simple: to find the optimally
predictive information in a chaotic system at different levels of granularity, and to study how the
predictive power of this information erodes with the passage of time.
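This erosion of predictive power is easy to see numerically. The sketch below uses the fully chaotic logistic map as a minimal stand-in (not the double pendulum studied here) and a plug-in histogram estimator of mutual information; the bin count and sample size are illustrative choices. It measures how much a coarse, 16-bin observation of the present state says about the state several iterations ahead:

```python
import numpy as np

# Logistic map in the fully chaotic regime (r = 4).
def logistic(x):
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(0)
x0 = rng.random(200_000)   # ensemble of initial conditions
n_bins = 16                # finite measurement resolution

def mi_bits(a, b, bins):
    """Plug-in estimate of I(a; b) in bits from binned samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 1], [0, 1]])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (px @ py)[mask])).sum())

# Mutual information between the coarse-grained present state and the
# future state decays toward zero as the time horizon grows.
x = x0.copy()
mi = []
for step in range(1, 11):
    x = logistic(x)
    mi.append(mi_bits(x0, x, n_bins))
```

The estimated information decays by roughly one bit per iteration, consistent with the map's Lyapunov exponent of ln 2, until the coarse measurement carries essentially no predictive power.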
Information loss is intimately connected to the distortion of regions in phase space by chaotic
dynamics, and thus to Lyapunov exponents: the sum of the positive Lyapunov exponents gives the
Kolmogorov-Sinai (KS) entropy, the average rate of information loss [5]. However, these quantities
are valid in the limit of maximal information—where infinitesimally-separated trajectories are
discernible—and are thus somewhat removed from reality [5]. The (ε, τ)-entropy generalizes the KS
entropy, describing the loss of predictive power for different amounts of information about the system
state [9]. It is defined by way of a rate-distortion objective that minimizes the rate of information
needed to predict the system state better than a threshold value of some chosen measure of distortion.
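As a concrete check on the Lyapunov-exponent picture, consider again the fully chaotic logistic map as a minimal stand-in (not the double pendulum studied here): it has a single positive Lyapunov exponent λ = ln 2, which is therefore also its KS entropy. The exponent can be estimated as the trajectory average of log |f'(x)|; the seed, burn-in, and trajectory length below are arbitrary illustrative choices:

```python
import numpy as np

# Lyapunov exponent of the logistic map x -> r x (1 - x) at r = 4,
# estimated as the average log stretching factor along a trajectory.
# For a 1-D map, the KS entropy equals the single positive exponent.
r = 4.0
x = 0.1234                 # arbitrary seed on the attractor
n_burn, n_steps = 1000, 200_000
lyap_sum = 0.0

for i in range(n_burn + n_steps):
    deriv = abs(r * (1.0 - 2.0 * x))   # |f'(x)|
    if i >= n_burn:                    # discard transient
        lyap_sum += np.log(deriv)
    x = r * x * (1.0 - x)

lyapunov = lyap_sum / n_steps          # analytic value: ln 2 ≈ 0.693
```

In nats per iteration, the estimate approaches ln 2: the map loses about one bit of state information per step, matching the decay rate seen at finite measurement resolution.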
The Information Bottleneck (IB) is a rate-distortion problem where the measure of distortion is based
on mutual information; it extracts the information from one variable that is most shared with a second
variable [25]. We can use the IB to find optimally predictive information from one state of a chaotic
system about a future state, and at the same time measure the loss of predictive power [6]. In this work
we develop a framework that uses the IB for analyzing chaotic dynamics with machine learning. We
arXiv:2210.14220v1 [cs.LG] 25 Oct 2022