
scales polynomially with log(N) and the condition number, κ, of the matrix A [13]. This provides an exponential speedup over the best classical approaches when κ is small, such as when an effective preconditioner is used.
However, the quantum circuit requirements of HHL—when applied to problems of even moderate size—are well beyond the capabilities of currently available quantum hardware [27]. This is largely because HHL utilizes complex subroutines, such as Quantum Phase Estimation, which require qubits that operate with almost no quantum noise or error. On NISQ hardware, HHL is thus impractical for systems of interest; the largest system solved to date using HHL is of dimension 16 × 16 [28–33]. Once large fault-tolerant quantum computers are developed, the exponential speedup offered by HHL (and variations/improvements thereon) could play a critical role in advancing subsurface flow modeling.
In the interim, progress in QLSP algorithms has occurred in two directions. The first is to tailor QLSP algorithms to the strengths and weaknesses of current NISQ computers, as does the algorithm we present in Sec. II B [26, 29, 34–36]. The second is to design algorithms that are still intended for fault-tolerant computers, but which do not rely on as many complex subroutines as HHL and thus may perform adequately on NISQ devices [37–40]. One such example is the adiabatic approach of Subasi, Somma, and Orsucci (SSO) [37]. This approach requires only a single subroutine, known as Hamiltonian simulation, while still offering a quantum speed-up equivalent to that of HHL.
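The adiabatic construction underlying SSO can be illustrated classically. The following is a minimal numpy sketch for a toy 2 × 2 Hermitian system: it evolves slowly along the Hamiltonian family H(s) = A(s)(I − |b̄⟩⟨b̄|)A(s) with A(s) = (1 − s) Z ⊗ I + s X ⊗ A and |b̄⟩ = |+⟩|b⟩, whose null state at s = 1 encodes A⁻¹|b⟩. The matrix entries, total time T, and step count are illustrative assumptions, and the matrix-exponential evolution stands in for the Hamiltonian-simulation subroutine a quantum device would perform.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2x2 Hermitian positive-definite system (illustrative values).
A = np.array([[2.0, -1.0], [-1.0, 2.0]]) / 3.0   # eigenvalues 1/3 and 1
b = np.array([1.0, 0.0])

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
b_bar = np.kron(plus, b)                          # |b_bar> = |+>|b>
P_perp = np.eye(4) - np.outer(b_bar, b_bar)       # projector orthogonal to |b_bar>

def H(s):
    # H(s) = A(s) (I - |b_bar><b_bar|) A(s), with A(s) interpolating
    # from Z (x) I at s = 0 to X (x) A at s = 1.
    As = (1 - s) * np.kron(Z, I2) + s * np.kron(X, A)
    return As @ P_perp @ As

# Start in the null state of H(0), which is |->|b>.
minus = np.array([1.0, -1.0]) / np.sqrt(2)
psi = np.kron(minus, b)

# Slow ("adiabatic") evolution, discretized into many short steps.
T, steps = 600.0, 6000
dt = T / steps
for j in range(steps):
    s = (j + 0.5) / steps
    psi = expm(-1j * H(s) * dt) @ psi

# The null state at s = 1 is |+> (x) x/||x||, where x = A^{-1} b.
x = np.linalg.solve(A, b)
target = np.kron(plus, x / np.linalg.norm(x))
fidelity = abs(np.vdot(target, psi))
print(f"fidelity with the solution state: {fidelity:.3f}")
```

Running slowly enough (large T) keeps the state in the instantaneous null space, so the final state overlaps strongly with the normalized solution of Ax = b.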
Before embarking on the purely NISQ-oriented approach of Sec. II B, we tested the SSO algorithm on a collection of very simple subsurface flow problems to assess how well current hardware could handle one fault-tolerant algorithm. As described in Sec. I, the problem was to compute pressures of a one-dimensional grid of either N = 4 or N = 8 nodes. (See subfigures (c) and (d) of Fig. 2 for a cartoon visualization.) Pressures on the boundaries were fixed, and the answer to the QLSP encoded the internal pressures.
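A problem of this form reduces to a small tridiagonal linear system. The sketch below builds it with a standard second-difference discretization, assuming unit grid spacing and uniform unit conductivity; the boundary pressures p_left and p_right are illustrative values, not the paper's.

```python
import numpy as np

def build_pressure_system(n_interior, p_left, p_right):
    """Finite-difference system for steady 1D flow (unit spacing/conductivity).

    Each interior pressure satisfies -p_{i-1} + 2 p_i - p_{i+1} = 0, with the
    fixed boundary pressures moved to the right-hand side vector b.
    """
    A = (2.0 * np.eye(n_interior)
         - np.eye(n_interior, k=1)
         - np.eye(n_interior, k=-1))
    b = np.zeros(n_interior)
    b[0] = p_left
    b[-1] = p_right
    return A, b

# N = 4 interior nodes, illustrative boundary pressures.
A, b = build_pressure_system(4, p_left=1.0, p_right=0.0)
x = np.linalg.solve(A, b)
print(x)   # pressures fall linearly from the left boundary to the right
```

Solving Ax = b recovers the expected linear pressure profile between the two fixed boundary values.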
The computational complexity and resulting accuracy of the SSO algorithm depend upon a unitless, user-defined parameter, q, which is connected to how long the algorithm is allowed to run. We showed that—up to a point—the algorithm returned better results as q increased; for both N = 4 and N = 8 problems running on a noiseless quantum simulator, the error ‖Ax − b‖ approached 0 for q = 10⁴. On the quantum hardware, the average error after an equivalent time was approximately 0.21 for an N = 4 problem and 0.54 for an N = 8 problem. (Note that for these problems, ‖b‖ = 1.)
Fig. 2 illustrates these results and two noteworthy points. First, the N = 8 problem exhibited a clear limit to how much the hardware results would improve with increasing q. Indeed, despite increasing q by four orders of magnitude, the average error when run on the quantum hardware decreased by only about 0.2. This suggests that—on NISQ-era devices—SSO's utility is limited even for problems with as few as 8 nodes.
Second, Fig. 2 compares the errors achieved on quantum simulators and hardware to the error obtained from a quantum state known as the maximally-mixed state. This comparison contextualizes the quality of the errors achieved by SSO, because the maximally-mixed state corresponds to a state in which noise has destroyed all information in the quantum system, and thus can be characterized as one of purely random information. Specifically, for the fracture flow LSPs we solved, obtaining a result from a quantum computer in the maximally-mixed state is equivalent to obtaining any of the possible states with equal probability. (In other words, a result from a quantum computer in the maximally-mixed state is a 'solution' chosen at random from a uniform distribution over all possible solutions.) Such a 'solution' is thus not meaningful, because any accuracy is due to randomness, and not to the performance of the SSO algorithm.
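A random-solution baseline of this kind is easy to estimate numerically. The sketch below uses a hypothetical normalized 8 × 8 tridiagonal system standing in for the N = 8 problem (the paper's actual fracture-flow matrix is not reproduced here): it draws 'solutions' uniformly at random from the unit sphere and averages the residual ‖Ax − b‖, with ‖b‖ = 1 to match the error scale quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 system; b has ||b|| = 1 so residuals are on the
# same scale as the errors quoted in the text.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A /= np.linalg.norm(A, 2)            # scale the spectral norm to 1
b = np.zeros(n)
b[0] = 1.0

# Maximally-mixed baseline: 'solutions' drawn uniformly at random from
# the unit sphere, with the residual ||Ax - b|| averaged over many draws.
samples = rng.normal(size=(20000, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
baseline = np.mean(np.linalg.norm(samples @ A.T - b, axis=1))

# The exact solution, by contrast, has (numerically) zero residual.
res_exact = np.linalg.norm(A @ np.linalg.solve(A, b) - b)
print(f"random-solution baseline: {baseline:.2f}, exact residual: {res_exact:.1e}")
```

The averaged residual of random unit vectors is order one, so an algorithm whose hardware error sits near this baseline has extracted little information beyond chance.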
Fig. 2 illustrates that the SSO algorithm offered very little improvement upon such a randomly-determined solution. For example, in the N = 8 case, the hardware results offered an improvement of only about 24%: the result from the maximally-mixed state had an error of 0.71, the quantum hardware achieved an average error of 0.54, and so the improvement due to SSO was only (0.71 − 0.54)/0.71 ≈ 0.24.
The fact that SSO’s performance on such small prob-
lems was so limited illustrates that, although fault-
tolerant algorithms like HHL and SSO have significant
promise, the noise on contemporary devices is too high
for accurately solving even very small problems using
these methods.
B. An Algorithm for the Near-Term Era
An alternative to fault-tolerant algorithms is the class of algorithms designed to operate in the NISQ regime, often by leveraging robust classical computing alongside quantum hardware. Variational Quantum Algorithms (VQAs) [22, 23, 25] encode a task of interest—in our case, solving a linear system—as an optimization problem. In these algorithms, the classical computer steers the optimization process while the quantum computer evaluates the cost function being optimized. The goal is to train a parameterized quantum circuit such that the parameters minimizing the cost function are also those that cause the circuit to compute the solution to the problem of interest. There are multiple approaches to solving the QLSP on near-term devices [26, 34, 35]; we focus on the Variational Linear Solver (VLS) algorithm of Ref. [26]. The VLS algorithm trains parameters in a quantum circuit such that, when a cost function is minimized, the solution encoded by the trained circuit is proportional to the solution x of the LSP.
We employed the VLS algorithm to determine pres-
sures at each node in a discretized model of the subsur-
face. With VLS, we can currently tackle much more com-