
A Continuous Convolutional Trainable Filter for Modelling
Unstructured Data
Dario Coscia∗1, Laura Meneghetti†1, Nicola Demo‡1, Giovanni Stabile§2,1, and
Gianluigi Rozza¶1
1Mathematics Area, mathLab, SISSA, via Bonomea 265, I-34136, Trieste, Italy
2Department of Pure and Applied Sciences, Informatics and Mathematics Section,
University of Urbino Carlo Bo, Piazza della Repubblica 13, I-61029, Urbino, Italy
May 26, 2023
Abstract
The Convolutional Neural Network (CNN) is one of the most important architectures in deep
learning. The fundamental building block of a CNN is a trainable filter, represented as a
discrete grid, used to perform convolution on discrete input data. In this work, we propose a
continuous version of a trainable convolutional filter that can also operate on unstructured data.
This new framework allows CNNs to be explored beyond discrete domains, extending the usage of
this important learning technique to a broader class of complex problems. Our experiments show
that the continuous filter can achieve a level of accuracy comparable to the state-of-the-art
discrete filter, and that it can be used in current deep learning architectures as a building
block to solve problems on unstructured domains as well.
1 Introduction
In the deep learning field, a convolutional neural network (CNN) [28] is one of the most important
architectures, widely used in academia and industrial research. For an overview of the topic,
the interested reader might refer to [30, 16, 2, 5, 52]. Despite the great success in many fields,
including, but not limited to, computer vision [26, 40, 22] and natural language processing [50, 11],
current CNNs are constrained to structured data. Indeed, the basic building block of a CNN is
a trainable filter, represented by a discrete grid, which performs cross-correlation, also known as
convolution, on a discrete domain. Nevertheless, the idea behind convolution can be easily extended
mathematically to unstructured domains; see [18] for reference. One possible approach to this kind
of problem is graph neural networks (GNNs) [24, 49], where a graph is built starting from the
topology of the discretized space. This allows convolution to be applied even to unstructured data by
exploiting the graph edges, thereby bypassing the limitations of the standard CNN approach.
However, GNNs typically require substantial computational resources, due to their inherent complexity.
In this article, instead, we present a methodology to apply CNNs to unstructured data by introducing
a continuous extension of the convolutional filter, named the continuous filter, without modeling
the data as a graph. The main idea, depicted graphically in Figure 1, relies on approximating
the continuous filter with a trainable function, represented by a feed-forward neural network,
and performing standard continuous convolution between the input data and the continuous filter.
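To make this idea concrete, the following minimal sketch shows one possible realization under simple assumptions: a small feed-forward network maps relative coordinates inside the filter support to kernel weights, and the continuous convolution is approximated by a weighted sum over the unstructured points falling inside that support. The code uses PyTorch, and all names (ContinuousFilter2d, support, hidden) are hypothetical illustrations, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ContinuousFilter2d(nn.Module):
        """Illustrative continuous filter: an MLP maps relative 2D coordinates
        inside the filter support to scalar kernel weights (hypothetical sketch)."""

        def __init__(self, support=0.1, hidden=32):
            super().__init__()
            self.support = support  # half-width of the (square) filter support
            self.kernel = nn.Sequential(  # trainable function approximating the filter
                nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1)
            )

        def forward(self, coords, values, query):
            # coords: (N, 2) positions of the unstructured input points
            # values: (N,)   field values at those points
            # query:  (M, 2) positions where the convolution is evaluated
            out = []
            for q in query:
                rel = coords - q                        # relative coordinates
                mask = (rel.abs() <= self.support).all(dim=1)
                w = self.kernel(rel[mask]).squeeze(-1)  # filter evaluated inside the support
                out.append((w * values[mask]).sum())    # discrete approximation of the integral
            return torch.stack(out)

    # Toy usage on random unstructured data
    pts = torch.rand(100, 2)
    vals = torch.sin(pts.sum(dim=1))
    filt = ContinuousFilter2d(support=0.2)
    print(filt(pts, vals, torch.rand(5, 2)).shape)  # torch.Size([5])

In practice, the per-point loop would be vectorized and the filter would produce multiple channels, but the sketch captures the core mechanism: a trainable function takes the place of the discrete grid of weights, while the square support mirrors the rectangular support of a standard discrete filter.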
Previous works have introduced different approaches to continuous convolution in various settings,
ranging from informatics and graph neural networks to physics and the modeling of quantum
interactions; see for example [39, 41, 4]. Even so, these approaches are difficult to generalize, and an analogy
with a discrete CNN filter is not straightforward. To the best of our knowledge, [48, 36] are the closest works in
∗dario.coscia@sissa.it
†laura.meneghetti@sissa.it
‡nicola.demo@sissa.it
§giovanni.stabile@uniurb.it
¶gianluigi.rozza@sissa.it