
A super-polynomial quantum-classical separation for density modelling
Niklas Pirnay,1 Ryan Sweke,2,∗ Jens Eisert,2,3 and Jean-Pierre Seifert1,4
1Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, 10587, Germany
2Dahlem Center for Complex Quantum Systems,
Freie Universität Berlin, Berlin, 14195, Germany
3Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany
4Fraunhofer SIT, D-64295 Darmstadt, Germany
(Dated: October 28, 2022)
Density modelling is the task of learning an unknown probability density function from samples,
and is one of the central problems of unsupervised machine learning. In this work, we show that there
exists a density modelling problem for which fault-tolerant quantum computers can offer a super-
polynomial advantage over classical learning algorithms, given standard cryptographic assumptions.
Along the way, we provide a variety of additional results and insights, of potential interest for
proving future distribution learning separations between quantum and classical learning algorithms.
Specifically, we (a) provide an overview of the relationships between hardness results in supervised
learning and distribution learning, and (b) show that any weak pseudo-random function can be used
to construct a classically hard density modelling problem. The latter result opens up the possibility
of proving quantum-classical separations for density modelling based on weaker assumptions than
those necessary for pseudo-random functions.
I. INTRODUCTION
The task of learning a representation of a probability distribution from samples is of importance in a wide variety
of contexts, from the natural sciences to industry. As such, a central focus of modern machine learning is to develop
algorithms and models for this task. Of particular importance is the distinction between density modelling and
generative modelling. In density modelling the task is to learn an evaluator for a distribution – i.e., a function which
on input of a sample returns the probability weight assigned to that sample by the underlying distribution. As such,
density modelling is sometimes referred to as evaluator learning, a convention we adopt here. In generative modelling, the task is
to learn a generator for a distribution – i.e., a function which given a (uniformly) random input seed outputs a sample
with probabilities according to the target distribution. As a result, generative modelling is sometimes referred to as
generator learning. As an example, in a generative modelling problem the goal might be to generate images of cats
or dogs, while the associated density modelling problem would be to evaluate the probability that an image depicts a
cat or a dog. It is important to stress that these two learning tasks are indeed fundamentally different. That is to say,
efficiently learning a generator for a distribution does not imply efficiently learning an evaluator and vice versa [1].
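The distinction between the two learning targets can be made concrete with a short sketch. The snippet below is purely illustrative: it uses a hypothetical four-outcome distribution over 2-bit strings, not the distribution class of Ref. [11], whose instances are over n-bit strings and are constructed from cryptographic primitives.

```python
import random

# A toy target distribution over 2-bit strings (illustrative only).
WEIGHTS = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}

def evaluator(x: str) -> float:
    """Density modelling / evaluator learning: on input of a sample,
    return the probability weight assigned to it by the distribution."""
    return WEIGHTS.get(x, 0.0)

def generator(seed: float) -> str:
    """Generative modelling / generator learning: map a uniformly random
    seed in [0, 1) to a sample distributed according to the target
    distribution (here via inverse-transform sampling)."""
    cumulative = 0.0
    for outcome, weight in WEIGHTS.items():
        cumulative += weight
        if seed < cumulative:
            return outcome
    return outcome  # guard against floating-point round-off at seed ≈ 1

# The evaluator answers "how likely is this sample?" ...
assert evaluator("00") == 0.5
# ... while the generator turns uniform randomness into samples.
assert generator(random.random()) in WEIGHTS
```

The point of the separation results discussed below is that learning one of these two objects from samples can be computationally easy while learning the other is hard, so neither function can in general be efficiently recovered from the other.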
Given the incredible success of modern machine learning, and the rapidly increasing availability of quantum computational devices, a natural question is whether quantum devices can provide any advantage in this domain [2–5].
Most current research in this direction is of a heuristic nature [6, 7]. However, there also exists an emerging body
of results which provide examples of machine learning tasks for which one can rigorously prove a meaningful separation between the power of classical and quantum computational devices [8–11], even though such rigorous separations remain rather scarce [12]. One of these, the work of Ref. [11], has rigorously shown that one can obtain a quantum advantage in generative modelling by constructing a distribution class which is (a) provably hard to generator learn classically, but (b) efficiently generator learnable using a fault-tolerant quantum computer. Importantly, however, Ref. [11] did not address the related task of density modelling.
In this work, we close this gap by showing that the class of probability distributions constructed in Ref. [11] in fact
also allows one to demonstrate a super-polynomial quantum-classical separation for density modelling. Additionally,
along the way we provide a variety of insights and additional results, which may be of independent interest for
constructing future quantum-classical separations in distribution learning. More specifically, in this work we do the
following:
1. Any quantum-classical separation requires a proof of classical hardness. The generative modelling separation
of Ref. [11] relies crucially on a result from Ref. [1] which shows that from any pseudo-random function (PRF)
one can construct a distribution class which is provably hard to generator learn classically. We strengthen this
∗Currently at IBM Quantum, Almaden Research Center, San Jose, CA 95120, USA.
arXiv:2210.14936v1 [quant-ph] 26 Oct 2022