
non-Markovian dynamics have been considered, e.g., in [9,10]. As for other interaction structures,
besides the complete graph one has also considered (infinite) lattices [11], co-evolving graphs [12],
and in particular random graphs. In the present work, we consider random interaction graphs in
conjunction with continuous-time, discrete-state random processes.
A recent branch of the literature utilizes the concept of graphons [13,14] (or graph limits) to derive
a large population limit of the dynamics on certain random graphs. For deterministic dynamics,
graphon theory is applied, e.g., in [15] to the analysis of reaching consensus, in [16] to graphs with
changing edge weights, and in [12] to Kuramoto-type models with adaptive network dynamics.
As for stochastic dynamics, there are, for example, works on the stochastic Kuramoto model on
Erdős–Rényi and regular random graphs [17] and on particle systems with randomly changing interaction
graphs [18]. In [19] the probability distribution of the discrete state of each single node is examined,
which in the large population limit yields a description of the process in terms of a partial integro-
differential equation. This approach is quite versatile, as the derived limit equation is able to
represent a wide range of dynamics. However, this versatility comes at the price of considerable
complexity and mathematical intricacy, and moreover, not every random graph model can be
represented by a graphon. We also note that while [19] considers a homogeneous population, in which
each agent evolves according to analogous (stochastic) rules, our setup allows for a
heterogeneous population.
A mean-field limit in the form of a PDE can also be obtained by considering the continuity
equation (or transport equation), which is discussed in [16,20] for deterministic consensus dynamics
with time-varying edge weights. Instead of considering the probability distribution of each node’s
state, other works describe the large population limit via distributions over other observables, e.g.,
the number of edges between nodes in certain states [21].
Furthermore, for discrete-state dynamics there are works that exploit symmetries in the network
to show that some system states of the global Markov process can be lumped together, in order
to obtain a process with fewer states [22,23]. While this approach could potentially yield a
lower-dimensional representation of the process, finding (approximately) lumpable states remains
a considerable problem [24,25].
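As a simple illustration of such a lumping (an example of our own, not taken from [22,23]): if the
interaction graph is complete and all nodes follow identical rules, the dynamics are equivariant under
node permutations, so that global states which agree up to a permutation of nodes can be lumped into
occupation counts,
\[
(x_1,\dots,x_N) \;\longmapsto\; (n_1,\dots,n_m), \qquad n_s = \#\{\, i : x_i(t) = s \,\},
\]
which reduces the number of states from $m^N$ to $\binom{N+m-1}{m-1}$. For less symmetric graphs,
however, such exact lumpings are rare, which is what makes finding approximate lumpings difficult.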
Another common approach to obtain mean-field type approximations is via so-called “moment-
closure” methods [26,5]. Here, equations for the evolution of the frequency of network motifs
are derived, e.g., the frequency of single nodes in a certain state, the frequency of linked pairs
(neighbors) in certain states, or the frequency of triangles in certain states. These equations are
often hierarchical, i.e., the equation for single nodes contains the frequency of pairs, the equation
for pairs contains the frequency of triplets, and so on. Hence, the equations have to be closed
to eliminate the dependency on higher-order motifs. (Commonly, the equations for pairs
are closed, which yields the well-known pair approximation [27,28,29]; see the sketch below.) This
closure introduces an error that generally does not vanish in the large population limit, and hence
these approaches do not guarantee an exact large population limit.
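To make the hierarchy concrete, the following is a hedged sketch for an SIS-type epidemic model,
chosen here purely for illustration (the notation $[\cdot]$, the rates $\tau$, $\gamma$, and the mean
degree $k$ are ours and not taken from the cited works). Writing $[A]$ for the number of nodes in
state $A$, $[AB]$ for the number of ordered pairs of adjacent nodes in states $A$ and $B$, and
$[ABC]$ for the corresponding ordered paths of length two, the first two levels of the hierarchy read
\begin{align*}
\frac{\mathrm{d}[I]}{\mathrm{d}t} &= \tau [SI] - \gamma [I], \\
\frac{\mathrm{d}[SI]}{\mathrm{d}t} &= \gamma\bigl([II] - [SI]\bigr) + \tau\bigl([SSI] - [ISI] - [SI]\bigr),
\end{align*}
so pairs appear in the equation for singles, and triples appear in the equation for pairs. A pair
approximation closes the system at the level of pairs via, e.g.,
$[ABC] \approx \frac{k-1}{k}\,\frac{[AB][BC]}{[B]}$ for a graph with mean degree $k$, after which no
higher-order motifs remain.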
In this work, we prove large population limits in the form of an ordinary differential equation,
which we call the mean-field limit, for stochastic processes on random graphs. We consider the
case that each node has one of finitely many discrete states and the evolution of a node’s state
is given by a continuous-time Markov chain. In this setting, it is of particular interest to observe
the shares (or concentrations) of each of the discrete states in the whole system, for example the
percentage of infected agents in an epidemiological model. Hence, we consider these shares as a
projection onto the aforementioned low-dimensional space and refer to them as collective
variables. However, we also consider collective variables given by finer shares that measure the
concentrations of the different states only within a subset of nodes, for instance, within a certain
cluster (or community) of nodes; a schematic definition is sketched below.
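To fix ideas, here is a minimal sketch in notation of our own choosing, purely for illustration: if
$x_i(t) \in \{1,\dots,m\}$ denotes the state of node $i$ in a network with nodes $\{1,\dots,N\}$,
then the global and the subset-based shares are
\[
C^N_s(t) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\{x_i(t) = s\}, \qquad
C^N_{s,V'}(t) = \frac{1}{|V'|} \sum_{i \in V'} \mathbf{1}\{x_i(t) = s\},
\]
for each state $s$ and each prescribed subset $V' \subseteq \{1,\dots,N\}$, e.g., a cluster of nodes.

Our main results are: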
1. Let a sequence of (random) graphs of increasing size and a dynamical model as described
above be given. We provide conditions for the choice of collective variables that guarantee
that their evolution under the original process converges in probability on finite time intervals
to a mean-field ODE (MFE) in the limit of infinitely large networks (Theorem 3.1 and
Corollary 3.4).
2. We verify these conditions for multiple random network topologies in section 4, in particular
for a voter model on Erdős–Rényi random graphs, on the stochastic block model, and on