
sors [30–32], as a control tool for quantum state preparation [33], and to extend the quantum coherence of a qubit by manipulating the environment [34–36].
Despite these pioneering experiments, several important methodological questions remain open. A priority concern is that adaptive protocols introduce an overhead, given by the time required to compute the settings for the next iteration on the fly. It is crucial to minimise this computation time, since it can slow the protocol down to the point that the overhead reverses the gain in measurement speed compared to a simple parameter sweep. This has not been considered in many cases, in particular where algorithms were investigated through computer simulations [21, 22, 27, 37] or as off-line processing of pre-existing experimental data [23]. While the optimisation of complex utility functions may deliver the best theoretical results, it can be practically less advantageous, in terms of total measurement duration, than near-optimal approaches with very fast update rules. A second issue is that, for multi-parameter Hamiltonian estimation, standard approaches such as the maximisation of the Fisher information can fail, as the Fisher information matrix becomes singular when controlling the evolution time [38]. This has stimulated researchers to devise ad-hoc heuristics, for example the particle guess heuristic [23, 24, 38] for the estimation of Hamiltonian terms; these heuristics, however, do not necessarily work beyond Hamiltonian estimation. A third question concerns which quantity should be optimised. Previous work has targeted the minimisation of the variance of the probability distribution for the quantity of interest [24, 39]. While this choice is clear when all measurements feature the same duration, the answer is less straightforward when the probing time is adapted: if two measurements with different probing times result in a similar variance, the protocol should prefer the shorter one, minimising the overall sensing time.
Here we address these open questions, presenting theoretical and experimental results on the adaptive estimation of decoherence for a single qubit, using NV centres as a case study. Compared to other recent investigations of adaptive protocols [23–25], our experiments utilise a very simple analytical update rule based on the concept of Fisher information and the Cramér-Rao bound. By exploiting state-of-the-art fast electronics, we experimentally perform the real-time processing in less than 50 µs, an order of magnitude shorter than in previous real-time experiments [24] and negligible compared to the duration of each measurement. Such a short timescale makes our approach useful for qubits where fast single-shot readout is available, such as trapped ions [40], superconducting qubits [41] and several types of spin qubits [42–45], and could be further shortened in future work by implementing the protocols on field-programmable gate array (FPGA) hardware.
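To make the role of a fast update rule concrete, the sketch below illustrates one simple Fisher-information-based heuristic for a binary-outcome measurement following the decay model of Eqs. (1)-(2) in Sec. II: the next probing time is chosen to maximise the Fisher information per unit sensing time. This is a minimal sketch under stated assumptions, not the analytical update rule used in our experiments; the function names and the grid search are illustrative.

```python
import numpy as np

def p_model(t, T_chi, beta=1.0):
    # Outcome probability from Eqs. (1)-(2), taking the proportionality
    # constant to be 1: p(t) = [1 - exp(-(t/T_chi)^beta)] / 2.
    return 0.5 * (1.0 - np.exp(-(t / T_chi) ** beta))

def fisher_information(t, T_chi, beta=1.0):
    # Fisher information about T_chi carried by a single binary outcome:
    # I(t) = (dp/dT)^2 / [p (1 - p)].
    chi = (t / T_chi) ** beta
    dp_dT = 0.5 * np.exp(-chi) * beta * chi / T_chi  # |dp/dT_chi|
    p = p_model(t, T_chi, beta)
    return dp_dT ** 2 / (p * (1.0 - p))

def next_probing_time(T_est, beta=1.0):
    # Heuristic update: maximise information per unit sensing time,
    # which targets sensitivity (variance x time) rather than variance.
    grid = np.linspace(0.05, 5.0, 400) * T_est  # candidate probing times
    return grid[np.argmax(fisher_information(grid, T_est, beta) / grid)]
```

In an adaptive loop, the current estimate `T_est` would be refreshed from the Bayesian posterior after each measurement and the rule re-applied; because the rule amounts to a closed-form expression or a small grid evaluation, its cost is compatible with microsecond-scale real-time processing.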
In the case of multi-parameter estimation, previous work on Hamiltonian estimation pointed out that the Cramér-Rao bound cannot be used in the optimisation, as the Fisher information matrix is singular and cannot be inverted [38]. Here we address this issue by utilising multiple probing times, showing that the Fisher information matrix can then be inverted and that the corresponding adaptive scheme provides better performance than non-adaptive approaches. Finally, we discuss which quantity should be targeted to achieve the best sensor performance, experimentally demonstrating the superiority of optimising the sensitivity, defined as the variance multiplied by the measurement time, over optimising the variance alone. As a figure of merit, the sensitivity encourages faster measurements.
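As an illustration of why multiple probing times restore invertibility, the sketch below (again assuming the binary-outcome model of Eqs. (1)-(2), with arbitrary parameter values) accumulates the 2×2 Fisher information matrix for the joint estimation of (T_χ, β). A single probing time yields a rank-1 outer product, hence a singular matrix; summing the contributions of two distinct probing times makes the matrix full rank, so it can be inverted to evaluate the Cramér-Rao bound.

```python
import numpy as np

def fisher_matrix(times, T_chi, beta):
    # 2x2 Fisher information matrix for joint (T_chi, beta) estimation,
    # summed over a set of probing times (binary outcomes assumed).
    F = np.zeros((2, 2))
    for t in np.atleast_1d(times):
        chi = (t / T_chi) ** beta
        p = 0.5 * (1.0 - np.exp(-chi))
        grad = np.array([
            -0.5 * np.exp(-chi) * beta * chi / T_chi,     # dp/dT_chi
            0.5 * np.exp(-chi) * chi * np.log(t / T_chi),  # dp/dbeta
        ])
        F += np.outer(grad, grad) / (p * (1.0 - p))
    return F

# One probing time: rank-1 outer product, singular matrix.
print(np.linalg.matrix_rank(fisher_matrix(2.0, T_chi=3.0, beta=1.5)))         # -> 1
# Two distinct probing times: full rank, so the matrix is invertible.
print(np.linalg.matrix_rank(fisher_matrix([1.0, 4.0], T_chi=3.0, beta=1.5)))  # -> 2
```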
Our work tackles these general questions using the characterisation of decoherence as a test case. While adaptive approaches have been investigated for phase and frequency estimation [17–24], also in relation to Hamiltonian learning [23], the case of decoherence is much less explored, with only one work targeting the estimation of the relaxation timescale T1 [25]. Here we provide the first complete characterisation of the three decoherence timescales typically used in experiments (T1, T2* and T2), together with the decoherence decay exponent β.
II. THEORY
Decoherence and relaxation are processes induced by the interaction of a qubit with its environment, leading to random transitions between states or random phase accumulation during the evolution of the qubit. These processes are typically estimated by preparing a quantum state and tracking the probability of still measuring the initial state over time, which can be captured by the functional form [10]

p(t) \propto \frac{1}{2}\left[1 - e^{-\chi(t)}\right].    (1)
Although the noise processes induced by the interaction with the environment can be complex, χ(t) can often be approximated by a simple power law:

\chi(t) \propto \left(\frac{t}{T_\chi}\right)^{\beta},    (2)

where T_χ and β depend on the specific noise process [1]. For white noise, the decay is exponential, with β = 1. For generic 1/f^q noise, relevant for example for superconducting qubits, with a noise spectral density ∝ 1/ω^q, χ(t) scales as χ(t) ∝ (t/T_χ)^{1+q} [46].
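As a quick numerical illustration of Eqs. (1)-(2), the snippet below (with a hypothetical timescale T_χ and the proportionality constant set to 1) evaluates p(t) for white noise (β = 1) and for standard 1/f noise (q = 1, hence β = 2), contrasting the exponential and Gaussian character of the two decays.

```python
import numpy as np

T_chi = 10.0                    # decay timescale, arbitrary units (assumed)
t = np.linspace(0.0, 30.0, 7)   # probing times
for beta in (1.0, 2.0):         # white noise vs 1/f noise (q = 1)
    p = 0.5 * (1.0 - np.exp(-(t / T_chi) ** beta))
    print(f"beta = {beta}:", np.round(p, 3))
```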
In the case of a single electronic spin dipolarly coupled to a dilute bath of nuclear spins, the decay exponents have been thoroughly investigated, with analytical solutions available for different parameter regimes [47]. If the intra-bath coupling can be neglected, the free induction decay of a single spin is approximately Gaussian (β = 2) [48, 49]. The exponent of the Hahn echo decay (T2) can vary, typically in the range β ∼ 1.5–4, depending on the specific bath parameters and the applied static magnetic field [47, 50].