Learning to predict arbitrary quantum processes
Hsin-Yuan Huang,1 Sitan Chen,2 and John Preskill1,3
1Institute for Quantum Information and Matter and
Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA, USA
2Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, USA
3AWS Center for Quantum Computing, Pasadena, CA, USA
(Dated: April 18, 2023)
We present an efficient machine learning (ML) algorithm for predicting any unknown quantum
process ℰ over 𝑛 qubits. For a wide range of distributions 𝒟 on arbitrary 𝑛-qubit states, we show that
this ML algorithm can learn to predict any local property of the output from the unknown process ℰ,
with a small average error over input states drawn from 𝒟. The ML algorithm is computationally
efficient even when the unknown process is a quantum circuit with exponentially many gates. Our
algorithm combines efficient procedures for learning properties of an unknown state and for learning
a low-degree approximation to an unknown observable. The analysis hinges on proving new norm
inequalities, including a quantum analogue of the classical Bohnenblust-Hille inequality, which we
derive by giving an improved algorithm for optimizing local Hamiltonians. Numerical experiments
on predicting quantum dynamics with evolution time up to 10^6 and system size up to 50 qubits
corroborate our proof. Overall, our results highlight the potential for ML models to predict the
output of complex quantum dynamics much faster than the time needed to run the process itself.
I. INTRODUCTION
Learning complex quantum dynamics is a fundamental problem at the intersection of machine learning
(ML) and quantum physics. Given an unknown 𝑛-qubit completely positive trace-preserving (CPTP) map ℰ
that represents a physical process happening in nature or in a laboratory, we consider the task of learning to
predict functions of the form

𝑓(𝜌, 𝑂) = tr(𝑂ℰ(𝜌)),    (1)

where 𝜌 is an 𝑛-qubit state and 𝑂 is an 𝑛-qubit observable.
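As a concrete illustration of Eq. (1), the following minimal NumPy sketch evaluates 𝑓(𝜌, 𝑂) for a toy single-qubit channel. The depolarizing map and its strength 𝑝 are illustrative stand-ins of our choosing; in the learning setting considered here, ℰ is unknown and accessible only through data.

import numpy as np

# Single-qubit Paulis for building toy states and observables.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def depolarizing(rho, p=0.2):
    """A stand-in CPTP map E: depolarizing noise with strength p."""
    dim = rho.shape[0]
    return (1 - p) * rho + p * np.trace(rho).real * np.eye(dim) / dim

def f(rho, O, channel=depolarizing):
    """Evaluate f(rho, O) = tr(O E(rho)) when the channel E is known."""
    return np.trace(O @ channel(rho)).real

rho_plus = 0.5 * (I + X)      # the state |+><+|
print(f(rho_plus, X))         # (1 - p) <+|X|+> = 0.8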
Related problems arise in many fields of research,
including quantum machine learning [1–10], variational quantum algorithms [11–17], machine learning for
quantum physics [18–29], and quantum benchmarking [30–36]. As an example, for predicting outcomes of
quantum experiments [8, 37, 38], we consider 𝜌 to be parameterized by a classical input 𝑥, ℰ to be an unknown
process happening in the lab, and 𝑂 to be an observable measured at the end of the experiment. Another example
arises when we want to use a quantum ML algorithm to learn a model of a complex quantum evolution, with the
hope that the learned model can predict the outcomes faster than running the evolution itself [7, 11, 12].
As an 𝑛-qubit CPTP map ℰ consists of exponentially many parameters, prior works, including those based
on covering number bounds [4, 7, 8, 37], classical shadow tomography [33, 39], or quantum process tomography
[30–32], require an exponential number of data samples to guarantee a small constant error for predicting
outcomes of an arbitrary evolution ℰ under a general input state 𝜌. To improve upon this, recent works
[4, 7, 8, 37, 40] have considered quantum processes ℰ that can be generated in polynomial time and shown
that a polynomial number of data samples suffices to learn tr(𝑂ℰ(𝜌)) in this restricted class. However, these
results still require exponential computation time.
In this work, we present a computationally efficient ML algorithm that can learn a model of an arbitrary
unknown 𝑛-qubit process ℰ, such that, given 𝜌 sampled from a wide range of distributions over arbitrary 𝑛-qubit
states and any 𝑂 in a large, physically relevant class of observables, the ML algorithm can accurately predict
𝑓(𝜌, 𝑂) = tr(𝑂ℰ(𝜌)). The ML model can predict outcomes for highly entangled states 𝜌 after learning from
a training set that only contains data for random product input states and randomized Pauli measurements
on the corresponding output states. The training and prediction of the proposed ML model are both efficient
even if the unknown process ℰ is a Hamiltonian evolution over an exponentially long time, a quantum circuit
with exponentially many gates, or a quantum process arising from contact with an infinitely large environment
for an arbitrarily long time. Furthermore, given the few-body reduced density matrices (RDMs) of the input state
𝜌, the ML algorithm uses only classical computation to predict the output properties tr(𝑂ℰ(𝜌)).
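To make the form of such training data concrete, the sketch below generates training examples for a small, classically simulable system. The random unitary standing in for ℰ, the input ensemble (products of single-qubit Pauli eigenstates), and all helper names are illustrative assumptions, not the paper's exact protocol; the point is that each training example is purely classical: an input-state description, the random measurement bases, and the observed bits.

import numpy as np

rng = np.random.default_rng(0)
SQ2 = np.sqrt(2)
# The six single-qubit Pauli eigenstates used for random product inputs.
STATES = [
    np.array([1, 0], dtype=complex),         # |0>
    np.array([0, 1], dtype=complex),         # |1>
    np.array([1, 1], dtype=complex) / SQ2,   # |+>
    np.array([1, -1], dtype=complex) / SQ2,  # |->
    np.array([1, 1j], dtype=complex) / SQ2,  # |+i>
    np.array([1, -1j], dtype=complex) / SQ2, # |-i>
]
H = np.array([[1, 1], [1, -1]], dtype=complex) / SQ2
SDG = np.diag([1.0, -1j])  # S^dagger
ROT = {"X": H, "Y": H @ SDG, "Z": np.eye(2, dtype=complex)}

def kron_all(factors):
    out = np.array([1.0 + 0j])
    for m in factors:
        out = np.kron(out, m)
    return out

def random_product_input(n):
    """Sample a random product input; return (classical labels, state vector)."""
    labels = rng.integers(0, 6, size=n)
    return labels, kron_all(STATES[l] for l in labels)

def randomized_pauli_measurement(psi, n):
    """Measure each qubit of |psi> in a uniformly random Pauli basis."""
    bases = rng.choice(list("XYZ"), size=n)
    rotated = kron_all(ROT[b] for b in bases) @ psi
    probs = np.abs(rotated) ** 2
    outcome = rng.choice(2 ** n, p=probs / probs.sum())
    bits = [(outcome >> (n - 1 - q)) & 1 for q in range(n)]
    return bases, bits

def random_unitary(dim):
    """Haar-random unitary via QR, a stand-in for the unknown process E."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

n = 3
U = random_unitary(2 ** n)  # stand-in: in the lab, E is unknown
dataset = []
for _ in range(1000):       # each entry is purely classical data
    labels, psi = random_product_input(n)
    bases, bits = randomized_pauli_measurement(U @ psi, n)
    dataset.append((labels.tolist(), bases.tolist(), bits))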
The proposed ML model is a combination of efficient ML algorithms for two learning problems: (1) predicting
tr(𝑂𝜌) given a known observable 𝑂 and an unknown state 𝜌, and (2) predicting tr(𝑂𝜌) given an unknown
observable 𝑂 and a known state 𝜌. We give sample- and computationally-efficient learning algorithms for
both problems. Then we show how to combine the two learning algorithms to address the problem of learning
to predict tr(𝑂ℰ(𝜌)) for an arbitrary unknown 𝑛-qubit quantum process ℰ. Together, the sample and
computational efficiency of the two learning algorithms implies the efficiency of the combined ML algorithm.
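The second subproblem admits a simple illustration. Below is a schematic sketch, under assumptions we choose for concreteness, of learning a low-degree Pauli approximation to an unknown observable from (state, outcome) pairs: inputs are uniformly random products of single-qubit Pauli eigenstates, for which E[tr(𝑃𝜌)^2] = 3^{-|𝑃|} for a weight-|𝑃| Pauli 𝑃, so rescaling the empirical correlation by 3^{|𝑃|} estimates each coefficient. This is a toy version in the spirit of the low-degree learner, not the paper's exact estimator.

import itertools
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
PAULI = {
    "I": I2,
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def kron_all(factors):
    out = np.array([[1.0 + 0j]])
    for m in factors:
        out = np.kron(out, m)
    return out

def pauli_term(label):  # e.g. "XIZ" -> X (x) I (x) Z
    return kron_all(PAULI[c] for c in label)

def random_product_state(n):
    """Density matrix of a uniformly random product of Pauli eigenstates."""
    return kron_all(0.5 * (I2 + rng.choice([-1, 1]) * PAULI[rng.choice(list("XYZ"))])
                    for _ in range(n))

def learn_low_degree(states, ys, n, k):
    """Estimate Pauli coefficients alpha_P (weight |P| <= k) of an unknown
    observable from pairs (rho_i, y_i ~ tr(O rho_i)) for this input ensemble."""
    ys = np.asarray(ys)
    coeffs = {"I" * n: float(np.mean(ys))}
    for label in itertools.product("IXYZ", repeat=n):
        label = "".join(label)
        w = sum(c != "I" for c in label)
        if not 1 <= w <= k:
            continue
        feats = np.array([np.trace(pauli_term(label) @ rho).real for rho in states])
        coeffs[label] = (3 ** w) * float(np.mean(feats * ys))
    return coeffs

def predict(coeffs, rho):
    return sum(a * np.trace(pauli_term(p) @ rho).real for p, a in coeffs.items())

# Toy check: recover O = Z_1 + 0.5 * X_1 X_2 on n = 2 qubits.
n, k = 2, 2
O = pauli_term("ZI") + 0.5 * pauli_term("XX")
states = [random_product_state(n) for _ in range(2000)]
ys = [np.trace(O @ rho).real for rho in states]
coeffs = learn_low_degree(states, ys, n, k)
print(coeffs["ZI"], coeffs["XX"])  # approx 1.0 and 0.5
rho_test = random_product_state(n)
print(predict(coeffs, rho_test), np.trace(O @ rho_test).real)  # should agree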
In order to establish rigorous guarantees for the proposed ML algorithms, we consider a different task:
optimizing a 𝑘-local Hamiltonian 𝐻 = ∑_{𝑃∈{𝐼,𝑋,𝑌,𝑍}^{⊗𝑛}} 𝛼_𝑃 𝑃, where 𝛼_𝑃 = 0 whenever 𝑃 acts
nontrivially on more than 𝑘 qubits. We present an improved approximate opti-