
efficient than MGN.
• Second, we modify the training distribution to use high-accuracy labels that better capture the true dynamics of the physical system. Rather than simply replicating the spatial convergence curve of traditional solvers, this allows the model to make better predictions than the reference simulator at a given resolution.
Together, these approaches are a key step forward for learned mesh-based simulation, improving accuracy for highly resolved simulations at a lower computational cost.
2. MultiScale MeshGraphNets
Here we introduce MultiScale MeshGraphNets (MS-MGN), a hierarchical version of MeshGraphNets (MGN). As in MGN, the model uses a message passing GNN to learn the temporal evolution of physical systems discretized on meshes. In contrast to MGN, message passing is performed both on the graph defined by the fine input mesh and on a coarser mesh. This coarse mesh is introduced only to promote more efficient communication in latent space, so that fast-acting or non-local dynamics can be modeled efficiently. All inputs and outputs are defined on the fine input mesh.
This architecture is inspired both by empirical findings about message propagation in graphs and by multigrid methods (Briggs et al., 2000; Bramble, 2019). First, the distance information can travel in Cartesian space within a single model step is bounded by the length of the mesh edges multiplied by the number of message passing blocks. Refining the mesh to obtain greater precision decreases the edge lengths, which lowers the speed at which information propagates. This can lead to certain effects not being modeled properly on high-resolution meshes. Using an auxiliary coarse mesh, we can retain high message propagation speeds even for very fine input meshes. Second, GNNs are related to Gauss-Seidel smoothing iterations in that they can only reduce errors locally. By solving the system at multiple resolutions, multigrid methods demonstrate an effective way to achieve global solutions using local updates.
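Concretely (a back-of-the-envelope restatement of the bound above, with notation introduced here for illustration): if $h$ is the characteristic edge length and $M$ the number of message passing blocks, information travels at most

$$ d_{\max} = M \cdot h $$

per model step. Refining the mesh by a factor $r$ shrinks $h$ to $h/r$, so preserving the same physical reach would require $rM$ blocks; an auxiliary coarse mesh with edge length $h_l \gg h$ instead restores the reach without extra blocks.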
MS-MGN uses the Encode-Process-Decode GNN framework introduced in Sanchez-Gonzalez et al. (2020), and is trained for next-step predictions and applied iteratively to unroll trajectories at inference time. For training, encoding, and decoding, we closely follow the MGN architecture. In this work, we focus on Eulerian dynamics, hence we only need to consider mesh edges and can omit world edges. The algorithm is described for 2D triangular meshes, but it can also be applied to, e.g., hexahedral or tetrahedral meshes. In departure from MGN, messages are passed independently on two graphs, the coarse graph $G_l$ and the fine graph $G_h$. Additionally, we define the upsampling and downsampling graphs $G_{up}$ and $G_{down}$ to propagate information between levels. The training loss is only placed on the nodes of the fine input graph $G_h$. Below, we describe graph construction and message passing operators for these graphs. The four graph operators are visualized in Figure 1; a more detailed description of encoding and message passing can be found in the Appendix (A.1).
Figure 1. The four update operators in MS-MGN: Downsample (left), where each node on the low-resolution mesh (orange mesh) receives information from the high-resolution mesh triangle (blue mesh) enclosing the node; High-resolution (bottom-middle), where high-resolution nodes are updated by their connected neighbors; Low-resolution (top-middle), where low-resolution nodes are updated by their connected neighbors; Upsample (right), where each high-resolution node receives information from the corresponding low-resolution nodes it updates in the Downsample update.
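To illustrate how the four operators compose, here is a minimal sketch of one processor pass in Python. The operator schedule (Downsample, Low-resolution, Upsample, High-resolution), the mean-aggregation block standing in for the learned MLP-based blocks, and all names are assumptions for illustration; the actual schedule is given in the Appendix (A.1).

```python
import numpy as np

def mp_block(src, dst, edges, W):
    # Simplified stand-in for a learned message passing block:
    # each receiver averages the latents of its senders, then
    # applies a residual nonlinear update (MGN/MS-MGN use MLPs
    # on node and edge latents instead).
    agg = np.zeros_like(dst)
    cnt = np.zeros((dst.shape[0], 1))
    for s, r in edges:  # edges given as (sender, receiver) pairs
        agg[r] += src[s]
        cnt[r] += 1
    return dst + np.tanh((agg / np.maximum(cnt, 1)) @ W)

def ms_mgn_process(v_h, v_l, e_h, e_l, e_down, e_up, params):
    # One pass over the four update operators of Figure 1.
    v_l = mp_block(v_h, v_l, e_down, params["down"])  # Downsample
    v_l = mp_block(v_l, v_l, e_l, params["low"])      # Low-resolution
    v_h = mp_block(v_l, v_h, e_up, params["up"])      # Upsample
    v_h = mp_block(v_h, v_h, e_h, params["high"])     # High-resolution
    return v_h  # decoded into next-step outputs on the fine mesh
```

At inference time, the decoded next-step prediction is fed back as input, unrolling a trajectory one step at a time.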
Encoder
A mesh is an undirected graph $G = (V, E)$ specified by its nodes $V$ and edges $E$. Let $D \subset \mathbb{R}^2$ be the physical domain where the problem is defined, and let $G_h = (V_h, E_h)$ and $G_l = (V_l, E_l)$ denote high-resolution and low-resolution mesh representations of $D$, respectively. We encode the fine input graph $G_h$ as in Pfaff et al. (2021), with the same node and edge features and identical latent sizes of 128. The coarse graph $G_l$ is encoded in a similar fashion. However, we only encode geometric features in the coarse graph, i.e., relative node coordinates on edges and a node type to distinguish between internal and boundary nodes. The input field variables such as velocity are only encoded into $G_h$.
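As a concrete illustration of the coarse-graph encoding, the sketch below builds the geometric features described above. The exact feature layout (relative edge displacement plus its norm, one-hot node type) and all names are assumptions; the subsequent MLP encoders are omitted.

```python
import numpy as np

def coarse_graph_features(pos_l, edges_l, node_type_l, num_types=2):
    # Edge features: relative node coordinates on each edge
    # (receiver minus sender; the sign convention is an assumption)
    # together with their Euclidean norm.
    rel = pos_l[edges_l[:, 1]] - pos_l[edges_l[:, 0]]
    edge_feats = np.concatenate(
        [rel, np.linalg.norm(rel, axis=1, keepdims=True)], axis=1)
    # Node features: one-hot node type (internal vs. boundary).
    node_feats = np.eye(num_types)[node_type_l]
    return node_feats, edge_feats  # then embedded to latent size 128
```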
We next construct the downsampling graph $G_{down} = (V_l \cup V_h, E_{h,l})$ as follows: for each fine-mesh node $i \in V_h$, we find the triangle on the coarse mesh which contains this node. Then, we create three edges $k_{h,l}: i \to j$ which connect the node $i$ to each corner node $j = j(i) \in V_l$ of the triangle.¹ As the nodes in this graph are already defined above in $G_h$ and $G_l$, we only need to define the edge feature encoding. The edge features are the relative node coordinates between senders and receivers, which are embedded using an MLP of the same architecture as in the MGN encoders.
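The containing-triangle search can be implemented with barycentric coordinates, as in the sketch below; the brute-force loop (a spatial index would be used in practice), the sign convention of the edge features, and all names are illustrative assumptions.

```python
import numpy as np

def barycentric(p, a, b, c):
    # Barycentric coordinates of point p in triangle (a, b, c);
    # all coordinates are non-negative iff p lies inside.
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    w1, w2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - w1 - w2, w1, w2])

def build_downsample_edges(pos_h, pos_l, tris_l, eps=1e-9):
    # For each fine node i, find the coarse triangle enclosing it
    # and connect i to the triangle's three corner nodes j, with
    # relative node coordinates as edge features.
    edges, feats = [], []
    for i, p in enumerate(pos_h):
        for tri in tris_l:
            w = barycentric(p, *pos_l[tri])
            if (w >= -eps).all():  # inside or on the boundary
                for j in tri:
                    edges.append((i, j))
                    feats.append(pos_l[j] - p)
                break
    return np.array(edges), np.array(feats)
```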
¹ For meshes with other element types (e.g., hexahedrons or tetrahedrons) we can do the same, by finding the containing element and connecting to all corner nodes.