Large population limits of Markov processes
on random networks
Marvin Lücke1, Jobst Heitzig3, Péter Koltai2, Nora Molkenthin4, and Stefanie
Winkelmann1
1Zuse Institute Berlin
2Department of Mathematics, Free University of Berlin
3FutureLab on Game Theory and Networks of Interacting Agents, Potsdam Institute for Climate Impact
Research
4Complexity Science Department, Potsdam Institute for Climate Impact Research
Abstract
We consider time-continuous Markovian discrete-state dynamics on random networks of
interacting agents and study the large population limit. The dynamics are projected onto
low-dimensional collective variables given by the shares of each discrete state in the system,
or in certain subsystems, and general conditions for the convergence of the collective variable
dynamics to a mean-field ordinary differential equation are proved. We discuss the convergence
to this mean-field limit for a continuous-time noisy version of the so-called “voter model” on
Erdős–Rényi random graphs, on the stochastic block model, and on random regular graphs.
Moreover, a heterogeneous population of agents is studied.
1. Introduction
Large networks, where the nodes represent agents and the edges indicate some form of interaction
between them, are used to model numerous (social) phenomena [1,2], e.g., the spreading of a disease
[3] or the diffusion of a certain (political) opinion within a society [4], just to name a few. In such
a framework each node has a state that changes over time, in dependence upon the states of other
nodes. Stochastic effects are often included in the model in order to take account of uncertainty in
the dynamics and variability of the agents’ behavior [2]. Even for simple interaction rules dictating
the evolution of an agent’s state, the emergent macroscopic system behavior remains difficult to
examine analytically. Hence, one usually resorts to numerical simulations in order to analyze such
systems [5]. As the computational effort for running simulations typically increases linearly or
faster with the size of the network, simulating large populations of agents is often cumbersome
or even infeasible. Thus, while on the one hand modeling each agent’s behavior simultaneously
carries the hope of capturing effects that are elusive to less detailed models, on the other hand one
is interested in reduced-order models that retain the central phenomena of the detailed model and
still allow for a good understanding and computational feasibility. One approach to address this
issue is to find a low-dimensional representation of the system which captures the dynamics in the
large population limit.
One of the classical results on this is by Kurtz [6], who studied continuous-time Markov chains
in the context of chemical reaction networks and showed the convergence of the concentrations to
the solution of an ordinary differential equation (ODE) as the system size tends to infinity. These
results may directly be translated to the Markovian dynamics of interacting agents on an all-to-
all coupled network, i.e., on a complete graph [7]. Several generalizations of this type of result
have emerged, both by employing different dynamics on the network and by considering different
interaction networks affecting the agents’ dynamics. For an all-to-all coupled network, by allowing
the states of the agents to evolve as stochastic differential equations (SDEs), one arrives at what
for interacting particle systems is called a McKean–Vlasov equation [8]. The mean-field limit of
arXiv:2210.02934v2 [math.PR] 7 Sep 2023
non-Markovian dynamics has been considered, e.g., in [9,10]. As for other interaction structures,
works have considered, instead of a complete graph, (infinite) lattices [11], co-evolving graphs [12],
and in particular random graphs. In the present work, random interaction graphs in conjunction
with continuous-time discrete-state random processes are considered.
A recent branch of literature utilizes the concept of graphons [13,14] (or graph limits) to derive
a large population limit of the dynamics on certain random graphs. For deterministic dynamics,
graphon theory is utilized, e.g., in [15] for the analysis of reaching consensus, in [16] for graphs with
changing edge weights, and in [12] for Kuramoto-type models with adaptive network dynamics.
As for stochastic dynamics, there exist, e.g., works on the stochastic Kuramoto model on Erdős–
Rényi and regular random graphs [17] and on particle systems with randomly changing interaction
graphs [18]. In [19] the probability distribution of the discrete state of each single node is examined,
which in the large population limit yields a description of the process in terms of a partial integro-
differential equation. This approach is quite versatile, as the derived limit equation is able to
represent a wide range of dynamics. However, this versatility comes with the price of rather
high complexity, mathematical intricacies, and also not all random graph models can be modeled
by a graphon. We also note that while [19] considers a homogeneous population, where each
agent evolves according to analogous (stochastic) rules, our setup will allow for considering a
heterogeneous population.
A mean-field limit in the form of a PDE can also be obtained by considering the continuity
equation (or transport equation), which is discussed in [16,20] for deterministic consensus dynamics
with time-varying edge weights. Instead of considering the probability distribution of each node’s
state, other works describe the large population limit via distributions over other observables, e.g.,
the number of edges between nodes of certain states [21].
Furthermore, for discrete-state dynamics there are works that exploit symmetries in the network
to show that some system states of the global Markov process can be lumped together, in order
to obtain a process with fewer states [22,23]. While this approach could potentially yield a lower
dimensional representation of the process, the finding of (approximately) lumpable states still poses
a considerable problem [24,25].
Another common approach to obtain mean-field type approximations is via so-called “moment-
closure” methods [26,5]. Here, equations for the evolution of the frequency of network motifs
are derived, e.g., the frequency of single nodes in a certain state, the frequency of linked pairs
(neighbors) in certain states, or the frequency of triangles in certain states. These equations are
often hierarchical, i.e., the equation for single nodes contains the frequency for pairs, the equation
for pairs contains the frequency of triplets, and so on. Hence, a closure of the equations has to be
performed to eliminate dependency on higher-order motifs. (Commonly, the equations for pairs
are closed, which yields the well-known pair-approximation [27,28,29].) This closure introduces
an error that generally does not vanish in the large population limit, and hence, these approaches
do not guarantee an exact large population limit.
In this work, we prove large population limits in the form of an ordinary differential equation,
which we call the mean-field limit, for stochastic processes on random graphs. We consider the
case that each node has one of finitely many discrete states and the evolution of a node’s state
is given by a continuous-time Markov chain. In this setting, it is of particular interest to observe
the shares (or concentrations) of each of the discrete states in the whole system, for example the
percentage of infected agents in an epidemiological model. Hence, we consider these shares as a
projection onto the before-mentioned low-dimensional space, and will refer to them as collective
variables. However, we also consider collective variables given by finer shares that measure the
concentrations of the different states only for a subset of nodes, for instance, the shares in a certain
cluster (or community) of nodes. Our main results are:
1. Let a sequence of (random) graphs of increasing size and a dynamical model as described
above be given. We provide conditions for the choice of collective variables that guarantee
that their evolution for the original process converges in probability on finite time intervals
to a mean-field ODE (MFE) in the limit of infinitely large networks (Theorem 3.1 and
Corollary 3.4).
2. We verify these conditions for multiple random network topologies in section 4, in particular
for a voter model on Erdős–Rényi random graphs, on the stochastic block model, and on
random regular graphs. We find that the MFE is a valid approximation for graphs of intermediate
density, i.e., where the average degree grows mildly with the graph size—depending
on the random graph model, logarithmically or even slower. Moreover, we discuss a voter
model with heterogeneous population, that is, the dynamical laws of agents can differ across
the network.
While common approaches using graph limits [19,16] are restricted to random graph models that
can be represented by graphons, our high-level convergence theorem (given in section 3) can in
principle be applied to arbitrary sequences of random graphs. For example, random regular graphs
exhibit stochastically dependent edges and can thus not be represented via graphons, but they can
be handled using the theory presented in this work (cf. section 4.5 and in particular Corollary 4.14).
Furthermore, we note that the bounds for the required graph density, which we derive in section 4,
are consistent with comparable findings from (diluted) graphon approaches [30].
The remaining part of this paper is structured as follows. In section 2 we give a precise definition
of the model and introduce the low-dimensional projection (collective variable) we consider. In
section 3 we prove that the large population limit of the projected dynamics is given by a mean-
field ODE, if certain conditions are fulfilled. We verify these conditions for several examples in
section 4 and conclude this paper in section 5.
2. Prerequisites
Notation For an integer N ∈ ℕ, we define [N] := {1, …, N}. Furthermore, bold symbols always
refer to random variables. The symbol →_p denotes convergence of random variables in probability.
The symbol ∥·∥ denotes any appropriate vector norm. For two functions f, g : ℝ → ℝ_{>0} we denote
the asymptotic dominance of f over g by

    f(x) ≫ g(x)  :⇔  f(x) = ω(g(x))  :⇔  lim_{x→∞} f(x)/g(x) = ∞    (1)

and asymptotic dominance of g over f by

    f(x) ≪ g(x)  :⇔  f(x) = o(g(x))  :⇔  lim_{x→∞} f(x)/g(x) = 0.    (2)
The model We consider a simple undirected graph G = (V, E) of size |V| = N. Without loss of
generality, we set V = [N]. Additionally, each node i ∈ [N] has one of M ∈ ℕ discrete states, i.e.,
x_i ∈ [M]. We denote the complete system state as x = (x_1, …, x_N) ∈ [M]^N.
Each node i changes its state over time due to a continuous-time Markov chain with transition
rate matrix Q_i^G(x), where Q_i^G : [M]^N → ℝ^{M×M}. For m ≠ n, the (m, n)-th entry of the rate matrix,
(Q_i^G(x))_{m,n} ≥ 0, specifies at which rate node i transitions from state m to state n. The diagonal
entries are such that each row sums to 0. Note that the transition rates Q_i^G(x) may depend on
the full system state x and on the graph G. Moreover, each node i may be subject to a different
function Q_i^G determining the transition rates. We denote the stochastic process referring to the
state of node i at time t as x_i(t), and the full process as x(t) = (x_1(t), …, x_N(t)). The generator
Q^G of the full process x(t) is given by

    Q^G(x, y) = { (Q_i^G(x))_{x_i, y_i}   if ∃ i ∈ [N] such that ∀ j ≠ i : x_j = y_j,
                { 0                        else,                                        (3)

and specifies the rate of transitioning to a state y ∈ [M]^N when starting in a different state x ∈
[M]^N. Note that this process has the following interpretation: Given x(t) = x, the processes x_i(s),
i = 1, …, N, are independent Markov jump processes with rate matrix Q_i^G(x) for s > t as long as no
jump takes place, i.e., x(r) = x for all t < r < s. When a jump occurs (almost surely no two jumps
occur simultaneously), the state x is re-set, and with it the rate matrices Q_i^G(x), and the independent
processes in the nodes start over with initial conditions dictated by x. This interpretation gives
rise to the common Gillespie algorithm [31] for generating individual realizations of the process:
draw the duration until the next event from the exponential distribution corresponding to the
accumulated rate of events, then draw the actual event with probabilities proportional to the
individual events' rates.
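A single step of this procedure can be sketched as follows. This is an illustrative implementation, not code from the paper or from [31]; the function name `gillespie_step` and the convention that `Q[i](x)` returns the M × M rate matrix of node i are our own choices.

```python
import random

def gillespie_step(x, Q, rng=random):
    """Perform one jump of the node-based Markov jump process.

    x   : list of node states (values in 0..M-1), mutated in place
    Q   : Q[i](x) returns the M x M rate matrix of node i given system state x
    rng : source of randomness (module `random` or a random.Random instance)
    Returns the waiting time until the jump occurred.
    """
    # Collect every possible event (node i jumps from x[i] to state n)
    # together with its rate (Q_i(x))_{x_i, n}.
    events, rates = [], []
    for i in range(len(x)):
        row = Q[i](x)[x[i]]  # rates out of node i's current state
        for n, rate in enumerate(row):
            if n != x[i] and rate > 0:
                events.append((i, n))
                rates.append(rate)
    total = sum(rates)
    if total == 0:  # absorbing state: no event can ever occur
        return float("inf")
    # Waiting time is exponential with the accumulated rate of all events ...
    tau = rng.expovariate(total)
    # ... and the actual event is drawn proportionally to the individual rates.
    i, n = rng.choices(events, weights=rates)[0]
    x[i] = n
    return tau
```

Iterating this step until a time horizon is reached yields one realization of x(t).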
Example 2.1. Common examples of the model introduced above are the following:
1. SI-model: In epidemiology, the SI-model [3] describes the spreading of a disease on a net-
work. The state of a node is either susceptible (S) or infectious (I). Let d_{i,I}^G(x) ∈ ℕ_0 denote
the number of infectious nodes in the neighborhood of node i when in system state x. Then
the rate of the susceptible node i becoming infectious is proportional to d_{i,I}^G(x), and infectious
nodes become susceptible again at a constant rate, i.e.,

    Q_i^G(x) = ( −α d_{i,I}^G(x)   α d_{i,I}^G(x) )
               ( β                 −β             )    (4)

with parameters α, β > 0.
2. Majority rule model: In the majority rule model of opinion dynamics [32,5] a node i with
state m can only adopt a different state n if a majority of its neighbors have that state n. Let
the indicator variable δ_{i,n}^G(x) ∈ {0, 1} denote whether more than 50% of agent i's neighbors
have state n. Then

    (Q_i^G(x))_{m,n} = α δ_{i,n}^G(x)    (5)

with parameter α > 0. We remark that a deterministic discrete-time majority rule model on
random graphs is considered in [33], where fast convergence to consensus is shown.
3. Voter model: In the so-called “voter model”, originally introduced in [34], the state of a node
refers to its opinion about some issue and changes stochastically based on the neighborhood
of the node. There are many variations of the voter model with slightly different dynamical
rules or other modifications, see e.g. [35,36,37,38,28,29]. In the model that we consider
later, a node’s rate to change to some state nother than its current state scales linearly with
the fraction of nodes in its neighborhood that have state n. We discuss the model in detail in
section 4.
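The rate matrix of the SI-model in equation (4), for instance, is straightforward to assemble once the number of infectious neighbors is known. The following sketch is illustrative only; the function name and the state encoding (0 = S, 1 = I) are our own conventions, not from the paper.

```python
def si_rate_matrix(alpha, beta, d_iI):
    """Rate matrix Q_i^G(x) of the SI-model, cf. equation (4).

    State 0 = susceptible (S), state 1 = infectious (I).
    d_iI : number of infectious neighbors of node i in the current state x.
    """
    return [
        [-alpha * d_iI, alpha * d_iI],  # S -> I at rate alpha * d_iI
        [beta, -beta],                  # I -> S at constant rate beta
    ]
```

Note that each row sums to 0, as required for a transition rate matrix.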
Random graphs In the following, we also examine Markov jump processes on random graphs. We
define a random graph G on N nodes as a random variable with values in the set of all 2^{N(N−1)/2}
many simple graphs on nodes V = [N]. In this setting, the stochastic process x(t) depends on
both the selection of a random graph according to G and the stochastic transitions of the Markov
jump process. More precisely, a realization of x(t) is given by first sampling a graph G from G,
initializing the node states, and then letting the Markov jump process run on G.
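As a concrete instance, an Erdős–Rényi random graph (treated later in section 4) includes each of the N(N−1)/2 possible edges independently with some probability p. A minimal sampling sketch, with illustrative names and an adjacency-set representation of our own choosing:

```python
import random

def sample_erdos_renyi(N, p, rng=random):
    """Sample a simple undirected Erdos-Renyi graph G(N, p) as adjacency sets.

    Each of the N*(N-1)/2 possible edges is included independently
    with probability p.
    """
    adj = [set() for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

A realization of x(t) would then be generated by drawing such a graph first and running the jump process on it.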
Classes and collective variables The main result of this work provides conditions under which
a low-dimensional representation of the dynamics described above converges to a mean-field ODE
in the large population limit. The map which projects the system state x onto this reduced space
is called a collective variable. We consider a special type of collective variable that measures the
concentration of each discrete node state within certain subsets of the nodes. We define these
subsets as follows. We assign each node to one of K ∈ ℕ classes and denote the classification of
node i as s_i ∈ [K]. Note that the class s_i of node i is fixed and does not depend on the realization
of the random graph G or on time. We call the tuple (x_i, s_i) the extended state of node i. Finally,
the collective variable C : [M]^N → ℝ^{MK} is defined by measuring the shares of each extended state
in each of these subsets, i.e.,

    C(x) = ( C_{(m,k)}(x) )_{m∈[M], k∈[K]},    C_{(m,k)}(x) := #{ i ∈ [N] : (x_i, s_i) = (m, k) } / N.    (6)
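In code, evaluating the collective variable of equation (6) amounts to counting extended states. A minimal sketch, with illustrative names and 0-based states and classes:

```python
from collections import Counter

def collective_variable(x, s, M, K):
    """Shares C_{(m,k)}(x) of each extended state (m, k), cf. equation (6).

    x : node states, x[i] in 0..M-1
    s : node classes, s[i] in 0..K-1 (fixed, independent of graph and time)
    Returns a dict mapping (m, k) to the share of nodes with that extended state.
    """
    N = len(x)
    counts = Counter(zip(x, s))  # count occurrences of each extended state
    return {(m, k): counts[(m, k)] / N for m in range(M) for k in range(K)}
```

By construction the M·K shares sum to 1, since every node has exactly one extended state.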
It is well-known that mean-field approximations, i.e., expressing the dynamics only in terms of
concentrations, work best if nodes (or particles in physical literature) are at least somewhat in-
distinguishable and interchangeable [5,16]. Hence, it is reasonable in our setting to group nodes
together in classes if they have similar traits and thus may become indistinguishable in the large
population limit. We provide some examples below.
Example 2.2. Common examples for the choice of classes:
[Figure: example graph with two clusters; the depicted shares are C_{(1,1)} = 2/20, C_{(2,1)} = 8/20, C_{(1,2)} = 6/20, and C_{(2,2)} = 4/20.]
Figure 1: Example graph G of size N = 20, sampled from a stochastic block model with two
clusters (see section 4.4 for more details). Each node has one of two states; state 1 is
indicated by blue color and state 2 by green color. We assign a node to class k if it is
located in cluster k, k = 1, 2. The collective variable C(x) measures the shares of the
two states in each cluster.
1. In the case K = 1, the extended states are the states themselves, (x_i, s_i) = (x_i, 1) ≅ x_i. The
collective variable C(x) measures the share of each state in the system. These collective
collective variable C(x)measures the share of each state in the system. These collective
variables are commonly discussed in mean-field literature, e.g. [39]. A necessary and sufficient
condition on the random graph sequence for the mean-field limit to hold has been derived in
[40] for processes where the transition rates Q_i are affine-linear in the so-called neighborhood
vector (an example would be given by equation (4)). We note that the continuous-time noisy
voter model that we consider later (cf. section 4.1) does not fall in this category of processes
and neither do so-called “complex contagion” models in which the infection rates are nonlinear
functions of the infection prevalence in the node’s neighborhood.
2. If the random graph exhibits a fixed modular structure with K communities or clusters, it is a
natural choice to measure the shares of the states in each cluster separately, i.e., a node located
in cluster k is assigned to class k. Hence, the extended state (x_i, s_i) = (m, k) refers to a node
with state x_i = m located in cluster k. We provide an example in Figure 1. Differentiating
nodes by their community is frequently used in the literature, see e.g. [38,41,42]. The
well-known stochastic block model, which we discuss in more detail in section 4.4, generates
random graphs that exhibit this clustered structure.
3. If nodes differ by their transition rate matrices Q_i^G, i.e., the population is heterogeneous, it
is reasonable to assign them to different classes. As an example, there could be very active
nodes (large transition rates) and rather inactive nodes (small or zero transition rates, often
referred to as “zealots” in literature [43,38,44]). We discuss the case of a heterogeneous
population in section 4.3.
4. We may want to differentiate between nodes by their position on the graph. For instance, we
could construct classes based on node degrees or centrality measures. An “influencer” class
could consist of nodes with high node degrees, while the “follower” class has lower degrees.
Note that in this case the class s_i of node i depends on the realization of the random graph
G, contrary to our previous assumption. We can remedy this issue by defining a function s
such that s_i = s(i, G) and adapting the collective variable accordingly, i.e.,
    C^G_{(m,k)}(x) := #{ i ∈ [N] : (x_i, s(i, G)) = (m, k) } / N.    (7)
The main theorem for convergence in the large population limit, which we will discuss in
section 3, also applies for this slight extension of the class framework. However, as the
examples that we discuss later in section 4 all employ fixed classes s_i, we will not consider
this extension further in this work.