Convolutional Neural Networks on Manifolds:
From Graphs and Back
Zhiyang Wang Luana Ruiz Alejandro Ribeiro
Abstract—Geometric deep learning has gained much attention in recent years due to the increasing availability of data acquired from non-Euclidean domains. Examples include point clouds for 3D models and wireless sensor networks in communications. Graphs are common models that connect these discrete data points and capture the underlying geometric structure. As the amount of such geometric data grows, graphs of arbitrarily large size tend to converge to a limit model – the manifold. Deep neural network architectures have proven to be a powerful technique for solving problems posed on data residing on manifolds. In this paper, we propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and pointwise nonlinearities. We define a manifold convolution operation that is consistent with the discrete graph convolution by discretizing in both the space and time domains. In summary, we adopt the manifold model as the limit of large graphs and construct MNNs on it, while recovering graph neural networks as discretizations of MNNs. We carry out experiments on a point-cloud dataset to showcase the performance of our proposed MNNs.
Index Terms—Manifold convolution, manifold neural networks,
geometric deep learning
I. INTRODUCTION
Convolutional neural networks (CNNs) have achieved impressive success in a wide range of applications, including but not limited to natural language processing [1], image denoising [2], and video analysis [3]. Convolution operations are implemented to capture local information and features based on the characteristics of the dataset. This remarkable success has established CNNs as powerful techniques for processing traditional signals such as sound, images, and video, all of which lie in Euclidean domains. As access to larger-scale data and stronger computing power grows, increasing attention is being paid to processing data lying in non-Euclidean domains.
Many practical problems rely on non-Euclidean data. Examples include detection and recommendation in social networks [4], resource allocation over wireless networks [5], and point clouds for shape segmentation [6]. Several works extend the CNN architecture to non-Euclidean domains [7]–[9], reproducing the success of CNNs in Euclidean domains. Among these models, graphs are commonly used to represent the underlying data structure, but the graph size scales with the amount of data. In this work, we aim to construct CNNs on a more general model – the manifold.
Supported by NSF CCF 1717120, Theorinet Simons, and ARL DCIST CRA under Grant W911NF-17-2-0181. Zhiyang and Alejandro are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA. Luana is with the Simons-Berkeley Institute, California, USA.
Graphs with well-defined limits have been shown to converge to a manifold model [10], [11], which makes the manifold capable of capturing the properties of a sequence of graphs. The convolution operation cannot be taken for granted in non-Euclidean domains due to the lack of a global parametrization and shift invariance. We define a manifold convolution operation based on the heat diffusion process governed by the Laplace-Beltrami operator, and construct a manifold convolutional filter to process manifold signals. By cascading layers consisting of manifold filter banks and nonlinearities, we define manifold neural networks (MNNs) as a deep learning framework on the manifold. To motivate practical implementations of the proposed MNNs, we first discretize the MNN in the space domain by sampling points on the manifold. The proposed MNN can be transferred to this discretized manifold as a discretized MNN, which converges to the underlying MNN when the manifold signal is bandlimited. We further carry out discretization in the time domain by sampling the filter impulse function at discrete and finite time steps. In this way, we can not only execute the proposed MNNs, but also recover graph convolutions and graph neural networks [7]. This completes our path from a graph sequence to its manifold limit and back to graphs. We finally verify the performance of the proposed MNN on a point-cloud-based model classification problem.
Related works include neural networks built on graphons [12], [13], which are limits of sequences of dense graphs. Unlike manifolds, graphons only represent the limits of graphs with unbounded degrees [14]. The stability of MNNs has been studied under perturbations of the Laplace-Beltrami operator [9], [15]. A general framework for algebraic neural networks has also been proposed, unifying architectures through commutative algebras [16].
The rest of the paper is organized as follows. We start with preliminary concepts and define manifold convolutions in Section II. We construct MNNs based on manifold filters in Section III. In Section IV, we implement the discretization in the space and time domains to make MNNs realizable, which also recovers graph convolutions. The proposed MNN is verified on a model classification problem in Section V. Conclusions are presented in Section VI.
II. MANIFOLD CONVOLUTION
A. Preliminary Definitions
In this paper, we consider a compact, smooth, and differentiable $d$-dimensional submanifold $\mathcal{M}$ embedded in $\mathbb{R}^N$. The embedding induces a Riemannian structure [17] on $\mathcal{M}$ which endows a measure $\mu$ over the manifold. Manifold signals supported on $\mathcal{M}$ are smooth scalar functions $f: \mathcal{M} \to \mathbb{R}$. We consider manifold signals in a Hilbert space in which we define the inner product as
$$\langle f, g \rangle_{L^2(\mathcal{M})} = \int_{\mathcal{M}} f(x) g(x) \, \mathrm{d}\mu(x) \tag{1}$$
with the norm defined as $\|f\|^2_{L^2(\mathcal{M})} = \langle f, f \rangle_{L^2(\mathcal{M})}$.
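As a concrete illustration of (1) (a sketch of ours, not part of the paper's development), the $L^2(\mathcal{M})$ inner product can be approximated by averaging over points sampled from $\mu$; the manifold, sample size, and test functions below are arbitrary choices.

```python
import numpy as np

# Hypothetical illustration: approximate the L2(M) inner product in (1)
# on the unit circle (a 1-dimensional submanifold of R^2) by a Monte
# Carlo average over n points sampled from the uniform measure mu.
rng = np.random.default_rng(0)
n = 10_000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # samples on M

f = points[:, 0]        # f(x) = first embedding coordinate, i.e. cos(theta)
g = points[:, 0]        # g(x) = same coordinate

# <f, g> = int_M f g dmu  ~  (vol(M) / n) * sum_i f(x_i) g(x_i)
vol = 2.0 * np.pi       # circumference of the unit circle
inner = vol / n * np.sum(f * g)
print(inner)            # ~ pi, since int_0^{2pi} cos^2(theta) dtheta = pi
```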
The manifold is locally Euclidean, which elicits the intrinsic gradient for differentiation as a local operator [11]. The local Euclidean space around $x \in \mathcal{M}$ containing all of the vectors tangent to $\mathcal{M}$ at $x$ is denoted the tangent space $T_x\mathcal{M}$. We use $T\mathcal{M}$ to represent the disjoint union of all tangent spaces on $\mathcal{M}$. The intrinsic gradient can thus be written as an operator $\nabla: L^2(\mathcal{M}) \to L^2(T\mathcal{M})$ mapping scalar functions to tangent vector functions on $\mathcal{M}$. The adjoint operator of the intrinsic gradient is the intrinsic divergence, defined as $\mathrm{div}: L^2(T\mathcal{M}) \to L^2(\mathcal{M})$. Based on these two differentiation operators, the Laplace-Beltrami (LB) operator $\mathcal{L}: L^2(\mathcal{M}) \to L^2(\mathcal{M})$ can be defined as the intrinsic divergence of the intrinsic gradient [18], formally as
$$\mathcal{L} f = -\mathrm{div} \circ \nabla f = -\nabla \cdot \nabla f. \tag{2}$$
Similar to the Laplacian operator in Euclidean domains or the Laplacian matrix of a graph [19], the LB operator evaluates how much the function value at point $x$ differs from the average function value of its neighborhood [11].
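For a computational view, the following sketch (under standard kernel-based assumptions, and not necessarily the exact construction used later in Section IV) builds a graph Laplacian from sampled points that approximates the LB operator in the limit of many samples [10], [11].

```python
import numpy as np

# A sketch under standard assumptions: a graph Laplacian built from points
# sampled on M, with Gaussian-kernel weights of bandwidth eps, approximates
# the LB operator as n grows and eps shrinks appropriately [10], [11].
def graph_laplacian(points: np.ndarray, eps: float) -> np.ndarray:
    """Dense Laplacian L = (D - W) / (eps * n) from Gaussian weights."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (4.0 * eps))
    np.fill_diagonal(W, 0.0)            # no self-loops
    D = np.diag(W.sum(axis=1))
    return (D - W) / (eps * points.shape[0])

# Usage: discretize the unit circle with 500 sampled points.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L = graph_laplacian(pts, eps=0.05)      # symmetric, positive-semidefinite
```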
The LB operator provides a basis for expressing and solving physical tasks by partial differential equations (PDEs). One of the remarkable applications is characterizing the heat diffusion over manifolds by the heat equation
$$\frac{\partial u(x,t)}{\partial t} + \mathcal{L} u(x,t) = 0, \tag{3}$$
where $u(x,t) \in L^2(\mathcal{M})$ measures the temperature at $x \in \mathcal{M}$ at time $t \in \mathbb{R}^+$. With the initial condition given by $u(x,0) = f(x)$, the solution can be expressed as
$$u(x,t) = e^{-t\mathcal{L}} f(x), \tag{4}$$
which provides an essential element for constructing the manifold convolution in the following.
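To make (4) concrete, the sketch below (ours, for illustration only) evaluates the heat-equation solution on a discretized manifold, where the LB operator is replaced by a graph Laplacian and $e^{-t\mathcal{L}}$ by a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Sketch (our illustration): with a discretized LB operator, i.e. a graph
# Laplacian matrix L, the heat-equation solution (4) becomes the matrix
# exponential u(t) = e^{-tL} f applied to the sampled signal f.
def heat_solution(L: np.ndarray, f: np.ndarray, t: float) -> np.ndarray:
    """Evaluate u(., t) = e^{-tL} f at the sampled points."""
    return expm(-t * L) @ f

# Usage on a toy Laplacian: a cycle graph discretizing the unit circle.
n = 100
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A
f = np.random.default_rng(0).standard_normal(n)
u = heat_solution(L, f, t=5.0)   # diffusion damps high-frequency content
```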
Due to the compactness of $\mathcal{M}$, the LB operator $\mathcal{L}$ is self-adjoint and positive-semidefinite. This means that $\mathcal{L}$ possesses a real positive spectrum $\{\lambda_i\}_{i=1}^{\infty}$, with eigenvalues $\lambda_i$ and corresponding eigenfunctions $\phi_i$ satisfying
$$\mathcal{L} \phi_i = \lambda_i \phi_i. \tag{5}$$
The eigenvalues are ordered as $0 < \lambda_1 \leq \lambda_2 \leq \ldots$. According to Weyl's law [20], we have $\lambda_i \sim i^{2/d}$ for a $d$-dimensional manifold. The orthonormal eigenfunctions $\phi_i$ form a general eigenbasis of $L^2(\mathcal{M})$ in the intrinsic sense. Since $\mathcal{L}$ is a total variation operator, the eigenvalues $\lambda_i$ can be interpreted as the canonical frequencies and the eigenfunctions $\phi_i$ as the canonical oscillation modes of $\mathcal{M}$.
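This frequency interpretation can be checked numerically; the sketch below (illustrative, with a cycle graph standing in for the circle, $d = 1$) computes the discrete spectrum and oscillation modes.

```python
import numpy as np

# Illustrative check (our own): the eigendecomposition of a discretized LB
# operator. On the cycle graph approximating the circle (d = 1), the low
# eigenvalues grow like i^2, consistent with Weyl's law lambda_i ~ i^{2/d}.
n = 200
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)   # L is symmetric positive-semidefinite
print(eigvals[:5])                     # 0, then small canonical frequencies
# Columns of eigvecs are the discrete oscillation modes: small eigenvalues
# correspond to slowly varying functions, large ones to rapid oscillations.
```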
B. Manifold Convolutional Filters
The convolution operation is a powerful technique that produces the zero-state response to any input signal given the filter impulse response of the system [21]. Just as time signals are processed with time convolutions and graph signals with graph convolutions [22], we define a manifold convolution with a filter impulse response $\tilde{h}$ and a manifold signal $f$.
Definition 1 (Manifold filter). Let $\tilde{h}: \mathbb{R}^+ \to \mathbb{R}$ and let $f \in L^2(\mathcal{M})$ be a manifold signal. The manifold filter with impulse response $\tilde{h}$, denoted $\mathbf{h}$, is given by
$$g(x) = (\mathbf{h} f)(x) := \int_0^{\infty} \tilde{h}(t) u(x,t) \, \mathrm{d}t, \tag{6}$$
where $u(x,t)$ is the solution of the heat equation (3) with $u(x,0) = f(x)$. Substituting the solution $u(x,t)$ with (4), we can derive a parametric form of $\mathbf{h}$ as
$$g(x) = (\mathbf{h} f)(x) = \int_0^{\infty} \tilde{h}(t) e^{-t\mathcal{L}} f(x) \, \mathrm{d}t = \mathbf{h}(\mathcal{L}) f(x). \tag{7}$$
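A minimal numerical sketch of (7), assuming a rectangle-rule discretization of the time integral and a graph-Laplacian stand-in for $\mathcal{L}$, is given below; it previews the time discretization of Section IV, and all names and step sizes are hypothetical choices of ours.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of (7) under a simple time discretization: sample h_tilde at K
# points with step dt, so the integral becomes a sum of powers of the
# diffusion "shift" e^{-dt*L} -- a graph filter, as discussed in Sec. IV.
def manifold_filter(L, f, h_tilde, dt=0.1, K=50):
    """Approximate h(L) f = int_0^inf h_tilde(t) e^{-tL} f dt."""
    shift = expm(-dt * L)        # one diffusion step acts as the shift
    g = np.zeros_like(f)
    u = f.copy()                 # u(., 0) = f
    for k in range(K):
        g += h_tilde(k * dt) * u * dt   # rectangle rule for the integral
        u = shift @ u                    # advance the diffusion by dt
    return g

# Usage: a low-pass filter from an exponentially decaying impulse response.
n = 100
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A
f = np.random.default_rng(0).standard_normal(n)
g = manifold_filter(L, f, h_tilde=lambda t: np.exp(-t))
```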
Manifold filters are local spatial operators that act directly on points of the manifold through the LB operator. The exponential term $e^{-t\mathcal{L}}$ can be interpreted as a shift operator, like the time delay in a linear time-invariant (LTI) filter [21] or the graph shift in a linear shift-invariant (LSI) graph filter [22]. In fact, manifold filters recover graph filters under discretization, which we discuss thoroughly in Section IV.
The LB operator $\mathcal{L}$ possesses the eigendecomposition $\{\lambda_i, \phi_i\}_{i=1}^{\infty}$. The eigenvalue $\lambda_i$ can be interpreted as a canonical frequency and the eigenfunction $\phi_i$ as a canonical oscillation mode. By projecting a manifold signal $f$ onto the eigenfunctions, we can write the frequency representation $\hat{f}$ as
$$[\hat{f}]_i = \langle f, \phi_i \rangle_{L^2(\mathcal{M})} = \int_{\mathcal{M}} f(x) \phi_i(x) \, \mathrm{d}\mu(x). \tag{8}$$
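As an illustration of (8) (ours, not from the paper), the projection reduces on sampled points to inner products with the eigenvectors of a graph Laplacian standing in for $\mathcal{L}$:

```python
import numpy as np

# Sketch of (8) on sampled points: the projection <f, phi_i> becomes an
# inner product with the i-th eigenvector of a discretized LB operator.
n = 200
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)

x = np.arange(n) * 2.0 * np.pi / n
f = np.cos(3.0 * x)                 # one dominant oscillation mode

f_hat = eigvecs.T @ f               # frequency representation [f_hat]_i
# Only the entries spanning the frequency-3 eigenspace are large, so f is
# (approximately) bandlimited in the sense of Definition 2 below.
print(np.argsort(np.abs(f_hat))[-2:])
```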
Definition 2 (Bandlimited manifold signals). A manifold signal $f$ is defined as $\lambda_M$-bandlimited with $\lambda_M > 0$ if $[\hat{f}]_i = 0$ for all $i$ such that $\lambda_i > \lambda_M$.
The spectrum and eigenbasis of the LB operator help us understand the frequency behavior of the manifold filter $\mathbf{h}(\mathcal{L})$. The frequency representation of the manifold filter output $g$ can be similarly written as
$$[\hat{g}]_i = \int_{\mathcal{M}} \int_0^{\infty} \tilde{h}(t) e^{-t\mathcal{L}} f(x) \, \mathrm{d}t \, \phi_i(x) \, \mathrm{d}\mu(x). \tag{9}$$
By substituting $e^{-t\mathcal{L}} \phi_i = e^{-t\lambda_i} \phi_i$, we get
$$[\hat{g}]_i = \int_0^{\infty} \tilde{h}(t) e^{-t\lambda_i} \, \mathrm{d}t \, [\hat{f}]_i. \tag{10}$$
The function that depends solely on $\lambda_i$ is defined as the frequency response of the filter $\mathbf{h}(\mathcal{L})$.
Definition 3 (Frequency response). The frequency response of the filter $\mathbf{h}(\mathcal{L})$ is given by
$$\hat{h}(\lambda) = \int_0^{\infty} \tilde{h}(t) e^{-t\lambda} \, \mathrm{d}t. \tag{11}$$
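As a quick sanity check of (11) (an illustration of ours, not from the paper), the impulse response $\tilde{h}(t) = e^{-t}$ yields the closed form $\hat{h}(\lambda) = 1/(1+\lambda)$, a low-pass response:

```python
import numpy as np
from scipy.integrate import quad

# Worked check of (11): for the impulse response h_tilde(t) = e^{-t},
# the frequency response integrates in closed form to
#   h_hat(lambda) = 1 / (1 + lambda),
# a low-pass profile that attenuates high LB frequencies.
h_tilde = lambda t: np.exp(-t)
for lam in [0.0, 1.0, 10.0]:
    numeric, _ = quad(lambda t: h_tilde(t) * np.exp(-t * lam), 0.0, np.inf)
    print(lam, numeric, 1.0 / (1.0 + lam))   # numeric and closed form agree
```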