TANGENT BUNDLE FILTERS AND NEURAL NETWORKS:
FROM MANIFOLDS TO CELLULAR SHEAVES AND BACK
C. Battiloro1,2, Z. Wang1, H. Riess3, P. Di Lorenzo2, A. Ribeiro1
1ESE Department, University of Pennsylvania, Philadelphia, USA
2DIET Department, Sapienza University of Rome, Rome, Italy
3ECE Department, Duke University, Durham, USA
E-mail: claudio.battiloro@uniroma1.it, zhiyangw@seas.upenn.edu
ABSTRACT
In this work we introduce a convolution operation over the tangent bundle of Riemannian manifolds exploiting the Connection Laplacian operator. We use the convolution to define tangent bundle filters and tangent bundle neural networks (TNNs), novel continuous architectures operating on tangent bundle signals, i.e. vector fields over manifolds. We discretize TNNs in both the space and time domains, showing that their discrete counterpart is a principled variant of the recently introduced Sheaf Neural Networks. We formally prove that this discrete architecture converges to the underlying continuous TNN. We numerically evaluate the effectiveness of the proposed architecture on the task of denoising a tangent vector field over the unit 2-sphere.
Index Terms—Geometric Deep Learning, Tangent Bundle Signal Processing, Tangent Bundle Neural Networks, Cellular Sheaves
1. INTRODUCTION
The success of deep learning is largely the success of Convolutional Neural Networks (CNNs) [1]. CNNs have achieved impressive performance in a wide range of applications, showing good generalization ability. One key attribute (though not the only one) is that convolutional filters, built on shift operators in the space domain, satisfy the property of shift equivariance. Nowadays, data defined on irregular (non-Euclidean) domains are pervasive, with applications ranging from detection and recommendation in social network processing [2], to resource allocation over wireless networks [3], to point clouds for shape segmentation [4], to name a few. For this reason, the notion of shift in CNNs has been adapted to convolutional architectures on graphs (GNNs) [5, 6] as well as to a plethora of other structures, e.g. simplicial complexes [7–10], cell complexes [11, 12], and manifolds [13]. In [14], a framework for algebraic neural networks exploiting commutative algebras has been proposed. In this work we focus on tangent bundles, a formal tool for describing and processing vector fields on manifolds, which are key objects in tasks such as robot navigation or flocking modeling.
Related Works. The renowned manifold assumption states that high-dimensional data examples are sampled from a low-dimensional Riemannian manifold. This assumption is the fundamental building block of manifold learning, a class of methods for non-linear dimensionality reduction. Some of these methods approximate manifolds with k-NN or geometric graphs built from sampled points, such that, for a fine enough sampling resolution, the graph Laplacian of the approximating graph “converges” to the Laplace-Beltrami operator of the manifold [15]. These techniques rely on the eigenvalues and eigenvectors of the graph Laplacian [16], and they give rise to a novel perspective on manifold learning. In particular, the above approximation leads to important transferability results for graph neural networks (GNNs) [17, 18], as well as to the introduction of Graphon and Manifold Neural Networks, continuous architectures shown to be limit objects of GNNs [19, 20]. However, most previous works focus on scalar signals, e.g. one or more scalar values attached to each node of a graph or point of a manifold; recent developments [21] show that processing vector data defined on tangent bundles of manifolds or on discrete vector bundles [22, 23] comes with a series of benefits. Moreover, the work in [24] proves that it is possible to approximate both manifolds and their tangent bundles with certain cellular sheaves obtained from a point cloud via k-NN and Local PCA, such that, for a fine enough sampling resolution, the Sheaf Laplacian of the approximating sheaf “converges” to the Connection Laplacian operator. Finally, the work in [25] generalizes the result of [24] by proving the spectral convergence of a large class of Laplacian operators via the Principal Bundle setup.
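To make the sheaf construction used in [24] more concrete, here is a minimal NumPy sketch of its two ingredients: Local PCA, which estimates a tangent-space basis at each sampled point, and the orthogonal transport maps that align the bases of neighboring points. The function names, the neighborhood size k, and the brute-force distance computation are illustrative assumptions of ours, not the notation of [24].

```python
import numpy as np

def local_pca_bases(X, k=10, d=2):
    """Estimate a d-dimensional tangent-space basis at each point of a
    point cloud X (n x p) via Local PCA over its k nearest neighbors."""
    # pairwise distances and k-NN indices (brute force, for clarity)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    knn = np.argsort(D, axis=1)[:, 1:k + 1]   # skip the point itself
    bases = []
    for i in range(X.shape[0]):
        nbrs = X[knn[i]] - X[i]               # center the neighbors at x_i
        # the top-d right singular vectors span the estimated tangent space
        _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
        bases.append(Vt[:d].T)                # p x d orthonormal basis O_i
    return bases, knn

def transport_maps(bases, knn):
    """Orthogonal maps aligning neighboring tangent spaces: the map from
    j to i is the orthogonal matrix closest to O_i^T O_j (via SVD)."""
    maps = {}
    for i in range(len(bases)):
        for j in knn[i]:
            U, _, Vt = np.linalg.svd(bases[i].T @ bases[j])
            maps[(i, int(j))] = U @ Vt        # d x d orthogonal transport
    return maps
```

In the resulting cellular sheaf, each sampled point carries its estimated tangent space as a stalk and the orthogonal maps provide the restriction-map data; the associated Sheaf Laplacian is the operator whose convergence to the Connection Laplacian is established in [24].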
Contributions. In this work we define a convolution operation over the tangent bundles of Riemannian manifolds exploiting the Connection Laplacian operator. Our definition is consistent, i.e. it reduces to the manifold convolution of [19] in the one-dimensional bundle case, and to the standard convolution if the manifold is the real line. We introduce tangent bundle convolutional filters to process tangent bundle signals (i.e. vector fields over manifolds), we define a frequency representation for them and, by cascading layers consisting of tangent bundle filter banks and nonlinearities, we introduce Tangent Bundle Neural Networks (TNNs). We then discretize TNNs in the space domain by sampling points on the manifold and building a cellular sheaf [26] that provides a legitimate approximation of both the manifold and its tangent bundle [24]. We formally prove that the discretized architecture over the cellular sheaf converges to the underlying TNN as the number of sampled points increases. Moreover, we further discretize the architecture in the time domain by sampling the filter impulse response at discrete, finite time steps, showing that space-time discretized TNNs are a principled variant of the very recently introduced Sheaf Neural Networks [23, 27, 28], discrete architectures operating on cellular sheaves and generalizing graph neural networks. Finally, we numerically evaluate the performance of TNNs on the task of denoising a tangent vector field over the unit 2-sphere.
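As a rough preview of the space-time discretization discussed in Section 5, the sketch below shows a sheaf-filter layer in the spirit of Sheaf Neural Networks [23, 27, 28]: a learned combination of powers of a Sheaf Laplacian applied to a discrete tangent bundle signal, followed by a pointwise nonlinearity. The polynomial form, weight shapes, and all names are illustrative assumptions; the precise filter obtained by sampling the impulse response is derived in Section 5.

```python
import numpy as np

def sheaf_filter_layer(L, X, W, sigma=np.tanh):
    """One sheaf-filter layer: accumulate powers of the Sheaf Laplacian L
    applied to the signal X, mix features with the weights W, then apply
    a pointwise nonlinearity.

    L : (n*d, n*d) Sheaf Laplacian (n points, d-dimensional stalks)
    X : (n*d, F_in) discrete tangent bundle signal with F_in features
    W : (K, F_in, F_out) filter taps
    """
    K, _, F_out = W.shape
    Y = np.zeros((X.shape[0], F_out))
    Z = X.copy()                 # Z holds L^k X, starting from k = 0
    for k in range(K):
        Y = Y + Z @ W[k]         # add the k-th tap's contribution
        Z = L @ Z                # one more application of the Sheaf Laplacian
    return sigma(Y)

# toy usage: a random symmetric PSD matrix as a stand-in Sheaf Laplacian
n, d, F_in, F_out, K = 30, 2, 3, 4, 3
L = np.random.randn(n * d, n * d)
L = L @ L.T
X = np.random.randn(n * d, F_in)
W = 0.1 * np.random.randn(K, F_in, F_out)
print(sheaf_filter_layer(L, X, W).shape)   # (60, 4)
```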
Paper Outline. The paper is organized as follows. We start with some preliminary concepts in Section 2. We define the tangent bundle convolution and filters in Section 3, and Tangent Bundle Neural Networks (TNNs) in Section 4. In Section 5, we discretize TNNs in space and time domains, showing that discretized TNNs are Sheaf Neural Networks and proving the convergence result. Numerical results are in Section 6 and conclusions are in Section 7.