
Based on the above considerations, we propose to use neural simulators to simulate cellular dynamics.
More specifically, we consider the scenario where both the movement and shape of the cell show
stochastic aspects and are highly dynamic, which is typically modeled in the Cellular Potts modeling
framework, proposed in [10]. Given their various successful applications in modeling spatiotemporal
data, we hypothesize that neural simulators are capable of faithfully emulating the ground truth
dynamics, while accelerating the simulation process. Our contributions are summarized as follows:
• We propose a neural simulation model to simulate stochastic single-cell dynamics similar to those generated by the Cellular Potts model;
• We develop and evaluate autoregressive training strategies, with the aim of improving the model’s rollout performance and its ability to capture stochastic dynamics;
• We observe that our method has the capacity to faithfully emulate the cellular dynamics of the Cellular Potts model, while generating simulations an order of magnitude faster.
2 Background and Related Work
2.1 Cellular Potts Model
The Cellular Potts (CP) model is a computational modeling framework for simulating cellular
dynamics and the dynamic, fluctuating morphology of cells on a lattice [10, 23, 1]. The CP model
has gained prominence due to its flexibility in modeling cell shape and movement, the interaction
between multiple cells, stochastic aspects of cell behavior, and multiscale mechanisms [25, 20, 11].
In the CP framework, the system is modeled as a Euclidean lattice $L$ and a Hamiltonian $H$. The function $x : L \to S$ maps each lattice site $l_i \in L$ to its state $x(l_i) \in S$, where $S$ is the set of all cells and materials that can be present in the system. Note that in the CP literature, $x$ is commonly referred to as $\sigma$; we deviate from this to follow machine learning convention. To evolve the system, a Markov-Chain Monte Carlo sampling algorithm is used. At every iteration, a lattice site $l_i$ is chosen at random. Then, a proposal is made to modify $x$ such that state $x(l_i)$ is changed to $x(l_j)$, where $l_j$ is a site adjacent to $l_i$. Finally, the difference in energy $\Delta H$ between the proposed and the current system state is calculated. If $\Delta H \leq 0$, the proposed state is accepted as the new system state; if $\Delta H > 0$, it is accepted with probability $e^{-\Delta H / T}$, with $T$ being the temperature parameter of the model.
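For concreteness, the acceptance criterion can be expressed as a small function. The sketch below is a minimal illustration of the Metropolis-style rule described above, assuming the energy difference for a proposed copy attempt has already been computed; the function name and signature are our own for this example.

```python
import math
import random

def accept_copy_attempt(delta_H: float, T: float) -> bool:
    """Metropolis-style acceptance rule of the CP model.

    A proposal that does not increase the energy is always accepted;
    otherwise it is accepted with probability exp(-delta_H / T).
    """
    if delta_H <= 0:
        return True
    return random.random() < math.exp(-delta_H / T)
```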
The Hamiltonian $H$ itself differs per application, but typically consists of at least contact energy and volume preservation terms, as originally proposed in [10]:
$$
H = \underbrace{\sum_{l_i, l_j \in \mathcal{N}(L)} J\big(x(l_i), x(l_j)\big)\big(1 - \delta_{x(l_i),\, x(l_j)}\big)}_{\text{contact energy}} + \underbrace{\sum_{c \in C} \lambda_V \big(V(c) - V^*(c)\big)^2}_{\text{volume preservation}} + H_{\text{other}}, \tag{1}
$$
where $\mathcal{N}(L)$ is the set of all pairs of neighboring lattice sites in $L$, $J(x(l_i), x(l_j))$ is the contact energy between cells and/or materials $x(l_i)$ and $x(l_j)$, and $\delta_{x,y}$ is the Kronecker delta. Furthermore, $C$ is the set of all cells in the system, $V(c)$ is the number of lattice sites occupied by cell $c$ (from here on referred to as the volume of cell $c$), $V^*(c)$ is the target volume of cell $c$, and $\lambda_V$ is a Lagrange multiplier. $H_{\text{other}}$ can consist of many extensions and modifications of the original Hamiltonian, for example taking into account cellular dynamics induced by forces, gradients in chemical concentrations, cell surface area constraints, and many other biological mechanisms. The specific Hamiltonians used for simulating our data can be found in Appendix A.
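To make Eq. (1) concrete, the sketch below evaluates the contact-energy and volume-preservation terms on a 2D lattice of integer cell labels. The 4-neighborhood, the contact-energy lookup `J`, and the target-volume dictionary are illustrative assumptions, not the exact settings used to generate our data (those are specified in Appendix A).

```python
import numpy as np

def hamiltonian(x, J, target_volume, lambda_V):
    """Contact-energy and volume-preservation terms of Eq. (1).

    x             -- 2D integer array; x[i, j] is the cell/material id at that site
    J             -- function J(a, b) returning the contact energy between ids a and b
    target_volume -- dict mapping cell id c -> target volume V*(c)
    lambda_V      -- Lagrange multiplier for the volume constraint
    """
    H = 0.0
    # Contact energy over horizontally and vertically adjacent site pairs
    # (a 4-neighborhood; CP implementations often use larger neighborhoods).
    for a, b in [(x[:, :-1], x[:, 1:]), (x[:-1, :], x[1:, :])]:
        mismatch = a != b  # corresponds to the (1 - Kronecker delta) factor
        for s, t in zip(a[mismatch], b[mismatch]):
            H += J(int(s), int(t))
    # Volume preservation for every cell in the system.
    for c, v_star in target_volume.items():
        volume = int(np.sum(x == c))
        H += lambda_V * (volume - v_star) ** 2
    return H
```

In practice, CP implementations do not re-evaluate the full Hamiltonian at every step; they compute only the local change $\Delta H$ induced by the two sites involved in a copy attempt.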
2.2 Neural Simulators
Neural networks have been employed for simulation in many domains [19, 5], often either by combining ML models with existing numerical solvers [32, 15] or by using ML models to simulate the dynamics in their entirety [4, 18, 28]. The latter, which we refer to as neural simulators, encompass the type of model proposed in this work, as we seek to emulate the CP simulations as a whole. Of particular interest are autoregressive methods operating on a spatial grid, as these fit both the temporal and spatial components of the CP simulations. This setup generally comes with the challenge of ensuring prediction quality and stability over longer rollout trajectories. Common approaches to address this include injecting noise and incorporating model rollouts in the training procedure [28, 4].
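As an illustration of the noise-injection strategy, the sketch below shows one training step in which the input state is perturbed with Gaussian noise before the single-step prediction loss is computed. The model, optimizer, tensors, and noise scale are placeholders and do not correspond to our actual architecture or hyperparameters.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x_t, x_next, noise_std=1e-2):
    """One-step autoregressive training with input noise injection.

    x_t, x_next -- consecutive simulation states, shape (batch, channels, H, W)
    noise_std   -- scale of the Gaussian perturbation added to the input,
                   mimicking the errors the model will see during rollouts
    """
    model.train()
    optimizer.zero_grad()
    noisy_input = x_t + noise_std * torch.randn_like(x_t)
    prediction = model(noisy_input)
    loss = F.mse_loss(prediction, x_next)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The rollout-based alternative instead unrolls the model for several steps during training and backpropagates through (part of) the predicted trajectory.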