Convolutional Neural Networks on Manifolds:
From Graphs and Back
Zhiyang Wang Luana Ruiz Alejandro Ribeiro
Abstract—Geometric deep learning has gained much attention in recent years due to the increasing availability of data acquired from non-Euclidean domains. Examples include point clouds for 3D models and wireless sensor networks in communications. Graphs are common models that connect these discrete data points and capture the underlying geometric structure. As the amount of such geometric data grows, graphs of arbitrarily large size tend to converge to a limit model – the manifold. Deep neural network architectures have proven to be a powerful technique for solving problems based on data residing on a manifold. In this paper, we propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and pointwise nonlinearities. We define a manifold convolution operation that is consistent with the discrete graph convolution when discretized in both the space and time domains. In summary, we focus on the manifold model as the limit of large graphs and construct MNNs, while graph neural networks can be recovered by discretizing the MNNs. We carry out experiments on a point-cloud dataset to showcase the performance of the proposed MNNs.
Index Terms—Manifold convolution, manifold neural networks,
geometric deep learning
I. INTRODUCTION
Convolutional neural networks (CNNs) have achieved impressive success in a wide range of applications, including but not limited to natural language processing [1], image denoising [2], and video analysis [3]. Convolution operations capture local information and features based on the characteristics of the dataset. This remarkable success has established CNNs as powerful techniques for processing traditional signals such as sound, images, and video, all of which lie in Euclidean domains. As larger-scale data and stronger computing power become accessible, increasing attention is being paid to processing data lying in non-Euclidean domains.
Many practical problems rely on non-Euclidean data; examples include detection and recommendation in social networks [4], resource allocation over wireless networks [5], and point clouds for shape segmentation [6]. Several works extend the CNN architecture to non-Euclidean domains [7]–[9], reproducing the success of CNNs in Euclidean domains. Among these models, graphs are commonly used to describe the underlying data structure, but the graph size scales with the amount of data. In this work, we aim to construct CNNs on a more general model – the manifold.
Supported by NSF CCF 1717120, Theorinet Simons and ARL DCIST CRA under Grant W911NF-17-2-0181. Zhiyang and Alejandro are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA. Luana is with the Simons-Berkeley Institute, California, USA.
Sequences of graphs with well-defined limits have been shown to converge to a manifold model [10], [11], which makes the manifold capable of capturing the properties of a whole series of graphs. The convolution operation cannot be taken for granted in non-Euclidean domains due to the lack of a global parametrization and shift invariance. We define a manifold convolution operation based on the heat diffusion process controlled by the Laplace-Beltrami operator, and construct a manifold convolutional filter to process manifold signals. By cascading layers consisting of manifold filter banks and nonlinearities, we define manifold neural networks (MNNs) as a deep learning framework on the manifold. To enable practical implementations of the proposed MNNs, we first discretize the MNN in the space domain by sampling points on the manifold. The proposed MNN can be transferred to this discretized manifold as a discretized MNN, which converges to the underlying MNN when the manifold signal is bandlimited. We further discretize in the time domain by sampling the filter impulse response at discrete and finite time steps. In this way, we can not only execute the proposed MNNs, but also recover graph convolutions and graph neural networks [7]. This completes our path from a graph sequence to its limit – a manifold – and back to graphs. We finally verify the performance of the proposed MNN on a point-cloud-based model classification problem.
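To make this discretization concrete, the sketch below is a minimal illustration (ours, under stated assumptions, not the authors' implementation): the sampled points yield a graph Laplacian $\mathbf{L}_n$ that approximates the Laplace-Beltrami operator, and sampling the heat-diffusion impulse response at unit time steps yields the polynomial filter $\sum_k h_k e^{-k\mathbf{L}_n}\mathbf{x}$, i.e., a graph convolution. The kernel bandwidth, the number of filter taps, and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import cdist

def graph_laplacian(points, eps=0.1, d=1):
    """Dense graph Laplacian built from n sampled manifold points.

    A Gaussian (heat-kernel) construction of this kind approximates the
    Laplace-Beltrami operator as n grows; the kernel bandwidth `eps` and
    the intrinsic dimension `d` are assumed choices for this sketch.
    """
    n = points.shape[0]
    sq_dists = cdist(points, points, "sqeuclidean")
    W = np.exp(-sq_dists / (4.0 * eps)) / (n * (4.0 * np.pi * eps) ** (d / 2.0))
    np.fill_diagonal(W, 0.0)
    return (np.diag(W.sum(axis=1)) - W) / eps

def manifold_filter(L, x, taps):
    """Time-discretized manifold convolution: sum_k taps[k] * e^{-kL} x.

    Sampling the heat-diffusion impulse response at unit time steps turns
    the continuous filter into a polynomial in the diffusion operator
    e^{-L}, i.e., a graph convolution with K = len(taps) taps.
    """
    D = expm(-L)              # one heat-diffusion step e^{-L}
    out = np.zeros_like(x)
    z = x.copy()
    for h_k in taps:
        out += h_k * z        # accumulate h_k * e^{-kL} x
        z = D @ z             # advance the diffusion by one step
    return out

# Toy usage: 100 points sampled from a circle (a 1-D manifold in R^2).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L = graph_laplacian(points)
x = np.sin(theta)             # a manifold signal sampled at the points
y = manifold_filter(L, x, taps=[0.5, 0.3, 0.2])
```

A full MNN layer would interleave such filters with a pointwise nonlinearity, e.g., np.maximum(y, 0.0), and cascade several such layers.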
Related works include neural networks built on graphons [12], [13], which are the limits of sequences of dense graphs. Unlike manifolds, graphons only represent the limits of graphs with unbounded degrees [14]. The stability of MNNs has been studied under perturbations of the Laplace-Beltrami operator [9], [15]. A general framework for algebraic neural networks, which unifies architectures through commutative algebras, has been proposed in [16].
The rest of the paper is organized as follows. We start
with some preliminary concepts and define the manifold
convolutions in Section II. We construct the MNNs based
on manifold filters in Section III. In Section IV, we discretize the MNNs in the space and time domains to make them realizable, which also brings us back to graph convolutions. Our
proposed MNN is verified in a model classification problem
in Section V. The conclusions are presented in Section VI.
II. MANIFOLD CONVOLUTION
A. Preliminary Definitions
In this paper, we consider a compact, smooth, and differentiable $d$-dimensional submanifold $\mathcal{M}$ embedded in $\mathbb{R}^N$. The embedding induces a Riemannian structure [17] on $\mathcal{M}$ which