set bounds on the scalability of any algorithm. With
the advent of quantum error correction (QEC) (Calderbank and Shor, 1996; Shor, 1995; Steane, 1996), this
challenge has been solved at least in theory. The cele-
brated threshold theorem (Aharonov and Ben-Or, 1997; Kitaev, 1997) showed that if errors in the quantum hard-
ware could be reduced below a finite rate, known as the
threshold, a fault-tolerant quantum computation of arbitrary length could be carried out even on noisy hard-
ware. However, besides the technical challenge of build-
ing hardware that achieves the threshold, the implemen-
tation of a fault-tolerant universal gate set with current
codes, such as the surface code (Fowler et al., 2012), gen-
erates a qubit overhead that seems daunting at the mo-
ment. For example, recent optimised approaches show
that scientific applications that are classically intractable
may require hundreds of thousands of qubits (Kivlichan et al., 2020), while industrial applications will require millions of qubits (Lee et al., 2021). There is ongoing
theoretical research to find alternative codes with a more
favourable overhead, and recent progress gives reasons to
be optimistic (Breuckmann and Eberhardt, 2021; Dinur et al., 2022; Gottesman, 2014; Panteleev and Kalachev, 2022). Nevertheless, the challenge of realising full-scale
fault-tolerant quantum computing is a considerable one.
This of course motivates the question of whether other
approaches, prior to the era of fully fault-tolerant sys-
tems, might achieve quantum advantage with significant
practical impacts. One might hope so, given the con-
tinual and remarkable progress that has been made in
quantum computational hardware. In recent years, it
has become routine to see reports of experiments demon-
strating high-quality control over multiple qubits [see e.g.
Asavanant et al. (2019); Ebadi et al. (2021); Jurcevic
et al. (2021); Madjarov et al. (2020); and Xue et al.
(2022)], some even reaching beyond 50 qubits [see e.g.
Arute et al. (2019) and Wu et al. (2021)]. Meanwhile
other experiments have indeed demonstrated early-stage
fault-tolerant capabilities [see e.g. Abobeih et al. (2022);
Egan et al. (2021); Google Quantum AI (2023); Krinner
et al. (2022); Postler et al. (2022); Ryan-Anderson et al.
(2022); and Takeda et al. (2022)]. Of course, the works
mentioned here are far from exhaustive, as it is impossible to capture all the breakthroughs on different fronts across the diverse range of platforms; we refer the reader to Acín et al. (2018) and Altman et al. (2021), and the references therein, for the key milestones in different platforms.
The primary goal of quantum error mitigation (QEM)
is to translate this continuous progress in quantum hard-
ware into immediate improvements for quantum infor-
mation processing. While accepting that hardware imperfections will limit the complexity of quantum algorithms, we can nevertheless expect every advance to push this boundary further. As
this review will demonstrate, the mitigation approach
indeed proves to be both practically effective and quite
fascinating as an intellectual challenge.
When exploring the prospects for achieving quantum
advantage through error mitigation, it is crucial to con-
sider suitable forms of circuits. It is understood that
in the era of noisy, intermediate-scale quantum (NISQ)
devices, only certain approaches may be able to achieve
meaningful and useful results. Due to the limited coher-
ence times and the noise floor present in quantum hard-
ware, one typically resorts to the idea of quantum com-
putation with short-depth circuits. Motivating examples
include variational quantum circuits in physics simula-
tions (McClean et al., 2016; Peruzzo et al., 2014; Wecker et al., 2015), approximate optimisation algorithms (Farhi et al., 2014), and even heuristic algorithms for quantum machine learning (Biamonte et al., 2017). Typically in
applications of these kinds, the algorithm can be under-
stood as applying a short-depth quantum circuit to a
simple initial state and then estimating the expectation
value of a relevant observable. Such expectation values
ultimately lead to the output of the algorithm, which
must be accurate enough to be useful in some context
(for example, for estimating the energies of molecular
states, a useful level of chemical accuracy corresponds
to 1 kcal/mol (Helgaker et al., 2000)). This leads to the
most essential feature of QEM: the ability to minimise the
noise-induced bias in expectation values on noisy hard-
ware. However, this can also be achieved by QEC and
many other long-established tools like decoherence-free
subspaces and dynamical decoupling sequences (derived
from optimal quantum control) (Lidar, 2014; Suter and Álvarez, 2016). Therefore this feature alone is not suf-
ficient to capture the QEM techniques that we wish to
cover in this review.
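The expectation-value estimation step described above can be made concrete with a short, purely schematic sketch. The snippet below averages the ±1 eigenvalues of a two-qubit Pauli observable, taken here to be Z₀Z₁ only as an example, over bitstrings sampled from a short-depth circuit; the function `run_circuit` is a hypothetical stand-in for executing the circuit on hardware or a simulator, not part of any specific library.

```python
import numpy as np

def estimate_expectation(run_circuit, shots=10_000):
    """Estimate <Z0 Z1> by averaging over measured bitstrings.

    `run_circuit(shots)` is a hypothetical stand-in that executes the
    short-depth circuit `shots` times and returns an integer array of
    shape (shots, n_qubits) of Z-basis measurement outcomes.
    """
    bitstrings = run_circuit(shots)
    # Eigenvalue of Z0 Z1 for each shot: +1 if the two bits agree, -1 otherwise.
    eigenvalues = 1 - 2 * (bitstrings[:, 0] ^ bitstrings[:, 1])
    # The sample mean estimates the expectation value; on noisy hardware it
    # carries a noise-induced bias in addition to the usual shot noise.
    return eigenvalues.mean()
```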
It is challenging to find a universally acceptable def-
inition of quantum error mitigation. For the purposes
of this review, we will define the term ‘quantum error
mitigation’ as algorithmic schemes that reduce the noise-
induced bias in the expectation value by post-processing
outputs from an ensemble of circuit runs, using circuits
at the same noise level as the original unmitigated cir-
cuit or above. That is to say, QEM will only reduce the
effective damage due to noise for the whole ensemble of
circuit runs (with the help of post-processing), but when
we zoom into each individual circuit run, the circuit noise
level remains unchanged or even increases. This is in contrast to other techniques such as QEC, which aim to
reduce the effect of noise on the output in every single
circuit run.
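As a minimal, purely illustrative sketch of this definition, the snippet below implements two-point zero-noise extrapolation, one canonical scheme that fits it: expectation values are estimated from ensembles of runs at the original noise level and at a deliberately amplified level, and a classical post-processing step extrapolates them towards the zero-noise limit. The helper `run_noisy_circuit` is an assumed interface to noise-amplified hardware execution and is not part of any specific library.

```python
import numpy as np

def mitigate_expectation(run_noisy_circuit, shots=10_000):
    """Reduce the noise-induced bias in <O> via zero-noise extrapolation.

    `run_noisy_circuit(noise_scale, shots)` is a hypothetical stand-in that
    returns the estimated expectation value from an ensemble of circuit
    runs whose noise has been amplified by `noise_scale` (>= 1).
    """
    scales = np.array([1.0, 2.0])
    # Every individual circuit run is at the original noise level or above;
    # only this classical post-processing step reduces the bias.
    estimates = np.array([run_noisy_circuit(s, shots) for s in scales])
    # Two-point linear (Richardson) extrapolation to the zero-noise limit.
    slope = (estimates[1] - estimates[0]) / (scales[1] - scales[0])
    return estimates[0] - slope * scales[0]
```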
Since QEM performs post-processing using data di-
rectly from noisy hardware, it will become impractical
if the amount of noise in the whole circuit is so large
that it completely damages the output. In practice, this means that for a given hardware set-up, there is a
maximum circuit size (circuit depth times qubit number)
beyond which QEM will become impractical, usually due