Nonequilibrium thermodynamics of uncertain stochastic processes
Jan Korbel1, 2 and David H. Wolpert3, 2, 4, 5, 6
1Section for Science of Complex Systems, CeMSIIS, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
2Complexity Science Hub Vienna, Josefstädter Strasse 39, 1080 Vienna, Austria
3Santa Fe Institute, Santa Fe, NM, USA
4Arizona State University, Tempe, AZ, USA
5International Center for Theoretical Physics, Trieste, Italy
6Albert Einstein Institute for Advanced Study, New York, USA
(Dated: May 23, 2023)
Stochastic thermodynamics is formulated under the assumption of perfect knowledge of all ther-
modynamic parameters. However, in any real-world experiment, there is non-zero uncertainty about
the precise value of temperatures, chemical potentials, energy spectrum, etc. Here we investigate
how this uncertainty modifies the theorems of stochastic thermodynamics. We consider two sce-
narios: in the first (called the effective) scenario, we fix the (unknown, randomly generated) experimental
apparatus and then repeatedly observe (stochastic) trajectories of the system for that fixed appara-
tus. In contrast, in the second (called the phenomenological) scenario, the (unknown) apparatus is re-generated
for each trajectory. We derive expressions for thermodynamic quantities in both scenarios. We also
discuss the physical interpretation of effective (scenario) entropy production (EP), derive the effec-
tive mismatch cost, and provide a numerical analysis of the effective thermodynamics of a quantum
dot implementing bit erasure with uncertain temperature. We then analyze the protocol for moving
between two state distributions that maximize effective work extraction. Next, we investigate the
effective thermodynamic value of information, focusing on the case where there is a delay between
the initialization of the system and the start of the protocol. Finally, we derive the detailed and
integrated fluctuation theorems (FTs) for the phenomenological EP. In particular, we show how the
phenomenological FTs account for the fact that the longer a trajectory runs, the more information
it provides concerning the precise experimental apparatus, and therefore the less EP it generates.
I. INTRODUCTION
The microscopic laws of classical and quantum physics
are parameterized sets of equations that specify the evo-
lution of a closed system starting from a specific state. To
use those equations, we need to know that specific state,
we need to be sure the system is closed, and we need to
know the values of the parameters in the equations [1, 2].
Unfortunately, in many real-world scenarios, we are
uncertain about the precise state of the system, and very
often, the system is open rather than closed, subject
to uncertain interactions with the external environment.
Statistical physics accounts for these two types of un-
certainty by building on the microscopic laws of physics
in two ways. First, to capture uncertainty about the
state of the system, we replace the exact specification
of the system’s state with a probability distribution over
states. Second, to capture uncertain interactions between
the system and the external environment, we add ran-
domness to the dynamics in a precisely parameterized
form [3].
In particular, in the sub-field of classical stochastic
thermodynamics [1, 2] we model the system as a probabil-
ity distribution evolving under a continuous-time Markov
chain (CTMC) with a precisely specified rate matrix. Of-
ten in this work, we require that the CTMC obeys local
detailed balance (LDB). This means that the rate ma-
trix of the CTMC has to obey certain restrictions, which
are parameterized by the energy spectrum of the system,
the number of thermodynamic reservoirs in the external
environment perturbing the system’s dynamics, and the
temperatures and chemical potentials of those reservoirs.
Often we also allow both the Hamiltonian of the system
and the rate matrix of the associated CTMC to change
in time in a deterministic manner, perhaps coupled by
LDB. That joint trajectory is referred to as a “protocol”.
However, in addition to uncertainty about the state of
the system and uncertainty about interactions with the
external environment, there is an additional unavoidable
type of uncertainty in all real-world systems: uncertainty
about the parameters in the equations governing the dy-
namics. In the context of stochastic thermodynamics,
this means that even if we impose LDB, we will never
know the reservoir temperatures and chemical potentials
to infinite precision (often even being unsure about the
number of such reservoirs), we will never know the energy
spectrum to infinite precision, and more generally, we will
never know the rate matrix and its time dependence to
infinite precision.
At present, almost nothing is known about the thermo-
dynamic consequences of this third type of uncertainty
despite its unavoidability [4]. In this paper, we start
to fill in this gap by considering how stochastic thermo-
dynamics (and non-equilibrium statistical physics more
generally) needs to be modified to account for this third
type of uncertainty, in addition to the two types of un-
certainty it already captures.
arXiv:2210.05249v2 [cond-mat.stat-mech] 22 May 2023
We define an apparatus α ∈ A to be any specific set
of values of the thermodynamic parameters of an exper-
iment, including the number of reservoirs, their temper-
atures and chemical potentials, the precise initial distri-
bution over states (i.e., how the system was prepared),
the (deterministic trajectories of the) rate matrices, the
(deterministic trajectories of the) energy functions, etc.
Here and throughout, we assume that these thermody-
namic parameters are appropriately related by LDB for
any specific α. For simplicity, we also assume that for
all apparatuses, the system has the same state space, X.
Also for simplicity, we assume that all non-protocol com-
ponents of an apparatus (in particular the temperatures
and chemical potentials) do not change in time. In addi-
tion, we assume that for all α, the process takes place in
the same time interval, [ti, tf]. We write an element of
X as x, and a trajectory of X values across [ti, tf] as 𝒙.
We suppose that α is not precisely known and write
its probability measure as dPα. Physically, it may be
that we have an infinite set of apparatuses generated by
IID sampling of dPα. Alternatively, dPα could represent
Bayesian uncertainty or a detailed model of the noise in
the measuring instruments used to set the parameters in
α. (Below, we will often abuse notation/terminology and
refer to a "distribution" over apparatuses when, properly
speaking, we should couch the discussion in terms
of a probability measure.) Abusing notation, we will use
A to denote both the random variable with values α, and
the event space of that random variable [5].
Concretely, we consider two kinds of experimental sce-
narios. Both start by sampling dPα, but they differ after
that:
I) In the effective scenario, we generate an appara-
tus by sampling dPα. For that fixed apparatus
we then generate many stochastic trajectories x
x
x.
After running all those trajectories for that fixed
apparatus, we can, if we wish, rerun the scenario,
generating another sample of the distribution over
apparatuses, which we then use to generate a new
set of stochastic trajectories. We call this the ef-
fective scenario.
Experimentally, in the effective scenario, one can
generate and then observe frequency counts of the
distribution p(𝒙|α) for multiple random values of α,
but without ever directly observing α. For exam-
ple, the experimenter might construct a bit-eraser
experimental apparatus involving a single thermal
reservoir whose temperature is fixed throughout
the experiment but only known to a finite pre-
cision of 0.1 K. The experimenter then runs their
experiment many times using this fixed apparatus
and collects statistics concerning the trajectories
across those experiments. They can then use those
estimates to make (perhaps Bayesian) estimates of
thermodynamic functions of a trajectory, like the
associated entropy production.
II) In the phenomenological scenario, we again gener-
ate an apparatus by sampling dPα, but the appa-
ratus cannot be fixed while we generate multiple
trajectories. Instead, in order to generate a new
trajectory we must first generate a new apparatus
by resampling dPα. We call this the phenomeno-
logical scenario.
To illustrate the phenomenological scenario we can
return to the example where the experimenter con-
structs a bit-eraser experimental apparatus involv-
ing a single thermal reservoir whose temperature
at the beginning of the experiment is only known
to some finite precision of 0.001 K. Suppose, though,
that the temperature is very slowly drifting ran-
domly in time. Any given run of the experiment
is very fast on the timescale of that drift, so we
can treat the temperature as fixed throughout the
run. However, after generating a trajectory by
running the experiment, it takes a long time for
the system to be reinitialized to rerun the experi-
ment, and during that time the temperature has
drifted to a new value that is statistically inde-
pendent of the value during the preceding run.
As in the effective scenario, the experimenter runs
their experiment many times and collects statis-
tics concerning (functions of) the trajectories across
those experiments. However, in the phenomeno-
logical scenario, one can only observe frequency
counts of the α-averaged distribution over trajec-
tories, p̄(𝒙) := ∫ dPα p(𝒙|α).
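The operational difference between the two scenarios can be sketched in a few lines of code. In this deliberately minimal toy model (our own construction, not the paper's setup), a "trajectory" is collapsed to a single sampled final state, the apparatus α is identified with an inverse temperature β, and p(x|α) is taken to have Gibbs form; all numerical values are arbitrary.

```python
import math
import random

E = [0.0, 1.0, 2.0]        # energy spectrum of a 3-state system (illustrative)
betas = [0.5, 1.0, 2.0]    # apparatus alpha identified with an inverse temperature
p_alpha = [0.2, 0.5, 0.3]  # distribution dP_alpha over the three apparatuses

def p_x_given_alpha(beta):
    """Stand-in for p(x|alpha): a Gibbs distribution at inverse temperature beta."""
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [wi / z for wi in w]

def sample(dist, rng):
    return rng.choices(range(len(dist)), weights=dist)[0]

rng = random.Random(0)
n = 50_000

# Effective scenario: draw ONE apparatus, then generate every trajectory with it.
beta_fixed = betas[sample(p_alpha, rng)]
eff_counts = [0, 0, 0]
for _ in range(n):
    eff_counts[sample(p_x_given_alpha(beta_fixed), rng)] += 1
p_eff = [c / n for c in eff_counts]    # estimates p(x | alpha_fixed)

# Phenomenological scenario: draw a FRESH apparatus for every trajectory.
phen_counts = [0, 0, 0]
for _ in range(n):
    beta = betas[sample(p_alpha, rng)]
    phen_counts[sample(p_x_given_alpha(beta), rng)] += 1
p_phen = [c / n for c in phen_counts]  # estimates the alpha-averaged p_bar(x)

# exact alpha-averaged distribution, for comparison
p_bar = [sum(pa * p_x_given_alpha(b)[x] for b, pa in zip(betas, p_alpha))
         for x in range(3)]
```

In the effective run the frequency counts converge to p(x|α) for whichever apparatus happened to be drawn, whereas in the phenomenological run they converge to the α-averaged p̄(x), which is exactly the distinction drawn above.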
Illustrations of both scenarios are depicted in Fig. 1, for
a simple three-state time-homogeneous system coupled
to one of the three possible apparatuses. In each case,
we measure the marginal probability distribution at the
final time tf. Note that in neither scenario do we allow
any direct measurement of α. However, there may be
indirect information about αthat arises from the precise
trajectory of states that is generated once αis chosen.
Crucially, the ensemble-level thermodynamic quanti-
ties generated in these two scenarios can differ, since the
two types of average involved (once over α, once over x)
do not necessarily commute. As an example, in the effec-
tive scenario, since we can form an estimate of pt(x|α) by
running many iterations of a fixed experimental appara-
tus, we can experimentally estimate the entropy defined
as
$$\bar{S}(P_t) = -\int \mathrm{d}P_\alpha \sum_x p_t(x|\alpha)\, \ln p_t(x|\alpha) \qquad (1)$$
using empirical frequency counts.
This is not possible in the phenomenological scenario,
in which we can only experimentally estimate a more
“coarse-grained” version of entropy,
$$S(\bar{P}_t) = -\sum_x \left(\int \mathrm{d}P_\alpha\, p_t(x|\alpha)\right) \ln\!\left(\int \mathrm{d}P_\alpha\, p_t(x|\alpha)\right) \qquad (2)$$
[Figure 1 appears here.]

FIG. 1. Comparison of the two uncertain-apparatus scenarios considered in this paper. (a) A simple three-state system that can be coupled to one of three apparatuses, with respective probabilities p(α1), p(α2), p(α3). (b) Trajectories of the values of a quantity E that has three possible values across the time interval [ti, tf], along with the associated empirical estimate of the relative probabilities that the system had each of those three values during [ti, tf]. (c), (d), (e) Plots for the effective scenario, where we can estimate the marginal distribution ptf(E|α) at time tf for a fixed apparatus α, shown for three separate instances of the scenario corresponding to the three possible apparatuses. (f) The phenomenological scenario, in which each trajectory is sampled with a different apparatus, and therefore only p̄tf(E) can be estimated.
using empirical frequency counts.
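To make the non-commutation of the two averages concrete, the following sketch evaluates both entropies for an arbitrary two-apparatus, two-state toy distribution (the numbers are purely illustrative). Since the Shannon entropy is concave, the coarse-grained entropy of Eq. (2) can never be smaller than the effective entropy of Eq. (1).

```python
import math

p_alpha = [0.3, 0.7]                  # measure over two apparatuses (toy numbers)
p_x_given = [[0.9, 0.1], [0.2, 0.8]]  # p_t(x|alpha) for a two-state system

def shannon(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# Eq. (1): apparatus-average of the per-apparatus Shannon entropy
S_eff = sum(pa * shannon(px) for pa, px in zip(p_alpha, p_x_given))

# Eq. (2): Shannon entropy of the alpha-averaged ("coarse-grained") distribution
p_bar = [sum(pa * px[x] for pa, px in zip(p_alpha, p_x_given)) for x in range(2)]
S_phen = shannon(p_bar)
```

For these numbers S_phen exceeds S_eff, and the gap shrinks to zero only when all apparatuses produce the same distribution over states.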
Note also that if we are in the effective scenario and
have the ability to force a new apparatus to be (ran-
domly) generated whenever we want, then we can imple-
ment the phenomenological scenario just by forcing a new
apparatus to be generated after every run. In this aug-
mented version of the effective scenario, we could experi-
mentally estimate the quantity in Eq. (2) using empirical
frequency counts. However, if we are in the effective sce-
nario and do not have this extra ability, then we cannot
estimate the quantity in Eq. (2), only the quantity in
Eq. (1). (In this paper, whenever we discuss the effec-
tive scenario, we will assume we do not have this extra
ability.)
The difference between the thermodynamics of the two
scenarios will be a central focus of our analysis below.
Related research
It is important to distinguish between the focus of this
paper and some of the issues that have been investigated
in the recent literature. Some recent research has con-
sidered how to modify stochastic thermodynamics if the
experimentalist is not able to view all state transitions in
the system as it evolves [6, 7]. The uncertainty in these
papers concerns what is observed as the system evolves,
whereas we focus on uncertainty in the parameters gov-
erning that evolution. Similarly, some models consider
either spatial [8] or temporal [9] variation of temperature
and other parameters, but they assume that this evolu-
tion is known. In contrast, we assume that α is fixed
throughout the interval, but to an unknown value.
Probably the closest research to what we consider in
this paper is sometimes called superstatistics. It has long
been known that an average over Gibbs distributions can-
not be written as some single Gibbs distribution (Thm. 1
in [10]). This means that even equilibrium statistical
physics must be modified when there is uncertainty in the
temperature of a system. The analysis of these modifica-
tions was begun by Beck and Cohen [11], who developed
an effective theory for thermodynamics with temperature
fluctuating in time. They considered a system coupled
to a bath, which is in a local equilibrium under the slow
evolution of the temperature of the bath. The main as-
sumption they exploit is scale-separation: while for short
time scales, the distribution over states of the system is
an equilibrium, canonical distribution with inverse tem-
perature β, the long-scale behavior is determined by a su-
perposition of canonical distributions with some distribu-
tion of temperatures f(β). The resulting superstatistical
distribution $p(E) = \int \mathrm{d}\beta\, f(\beta)\, e^{-\beta E}/Z(\beta)$ was later
identified with the distribution corresponding to gener-
alized entropic functionals [12, 13], because particular
generalized entropic functionals are maximized by the
same distribution that is obtained as a superposition of
canonical distributions with a given f(β) [14, 15].
Later interpretations of superstatistics are not based
on the notion of local equilibria but rather on the
Bayesian approach to systems with uncertain tempera-
ture [16, 17]. These are conceptually closer to the focus
of this paper, which focuses on off-equilibrium systems
that are evolving quickly on the scale of the coupling
with the thermal reservoirs, and so cannot be modeled in
terms of time-scale separation.
Similar to the quasi-equilibrium scenarios considered
in superstatistics, other research has focused on deriving
an effective description of the system in local equilibrium
averaged over uncertain thermodynamic parameters. In
particular, this is the basis of a very rich and well-studied
approach to analyzing spin glasses [18, 19], in which
the coupling constants Jij in the spin-glass Hamiltonian
$H = -\sum_{(ij)} J_{ij}\, s_i s_j$ are random variables drawn from a
given distribution p(Jij). Given such a distribution, the
famous replica trick $\overline{\ln Z} = \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n}$ [20] can be used
to calculate the Helmholtz free energy, averaged over all
Jij. Let us note that, in the terminology used in disor-
dered systems, annealed disorder corresponds to the
effective scenario while quenched disorder corresponds to
the phenomenological scenario.
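The gap between the two disorder averages can be seen in a deliberately minimal example: a single two-spin bond whose coupling J is drawn from a hypothetical two-point distribution (our own toy numbers, not a model from the literature). By Jensen's inequality, the quenched average E[ln Z] never exceeds the annealed ln E[Z].

```python
import math

beta = 1.0
Js = [0.5, 1.5]   # hypothetical two-point distribution of a single coupling
pJ = [0.5, 0.5]

def logZ(J):
    # single bond of two spins: Z = sum over s1,s2 in {-1,+1} of exp(beta*J*s1*s2)
    #                             = 4*cosh(beta*J)
    return math.log(4 * math.cosh(beta * J))

quenched = sum(p * logZ(J) for p, J in zip(pJ, Js))                      # E[ln Z]
annealed = math.log(sum(p * math.exp(logZ(J)) for p, J in zip(pJ, Js)))  # ln E[Z]
```

The replica trick exists precisely because the quenched average E[ln Z] is the hard quantity to compute analytically, while moments of Z are comparatively easy.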
Finally, several authors [21–24] investigated the case
where the initial distribution differs from the one that
would minimize EP. This describes the situation in which
the system is designed by a scientist whose assumption
concerning the initial distribution was mistaken. In that
case, the choice of the non-optimal solution generates
extra entropy production, described by the so-called
mismatch cost. These papers do not, however, consider
an initial distribution that is re-sampled each time the
experiment is re-run.
Roadmap
One of the major themes of our investigation is that
some of the details of how an experiment is conducted
that experimenters currently do not consider in fact have
major effects on the precise forms of various thermody-
namic quantities. This is reflected in the difference be-
tween (the thermodynamics of) the effective and phe-
nomenological scenarios, discussed above. Even within
the effective scenario though, there are some important
distinctions between different ways of running the exper-
iment (and so different ways of defining thermodynamic
quantities). In particular, there is a major effect on the
thermodynamics itself that arises from whether the ex-
perimenter's protocol (the time-dependent trajectory of
Hamiltonians of the system) changes from one run of an
experiment to the next, or instead is fixed across all runs.
We call these the “unadapted” and “adapted” situations,
respectively. We start in Section II with a simple illustra-
tive example of these two situations, involving a moving
optical tweezer with uncertain stiffness parameter.
We then begin our more general analysis. First, in
Section III, we introduce the necessary notation and
briefly recall the main results of traditional, full-certainty
stochastic thermodynamics. In Section IV, we present
the general form that stochastic thermodynamics takes in
the effective scenario (recall the discussion of the effective
and phenomenological scenarios in the introduction). We
begin by noting that the evolution of the effective proba-
bility distribution is not Markovian. Then we derive the
forms of the first and second laws of thermodynamics
for effective thermodynamic quantities. Next we discuss
the relation between effective EP and effective dissipated
work. We illustrate this discussion with the numerical
example of a fermionic bit erasure with uncertain tem-
perature. We end this section by investigating the special
case where the only uncertainty concerns the initial dis-
tribution, calculating the associated effective mismatch
cost.
In Section VI, we focus on (feedback) control proto-
cols for uncertain apparatuses. In contrast to the con-
ventional case where the apparatus is precisely known,
we assume we cannot tailor the protocol for each (un-
certain) apparatus separately, but instead must use the
same protocol for all apparatuses. We use this setting to
investigate how apparatus uncertainty affects a founda-
tional concern of stochastic thermodynamics: how much
work can be extracted from a system during a process
that takes it from a given initial distribution to a given
target distribution?
First, we consider this issue when we are uncertain
both about the initial distribution (though not the fi-
nal one) and about the temperature of the system as it
evolves. We focus on how that uncertainty changes the
results of the standard analysis of this issue, in which we
suppose a {quench; equilibrate; semi-statically-evolve}
process is applied to the system immediately after the
initial distribution is generated.
Next, we use this analysis to consider how uncertainty
affects the “thermodynamic value of information” to a
feedback controller [25, 26]. We restrict attention to
the special case of the analysis where the temperature
is known exactly, so the only uncertainty is in the ini-
tial distribution. We also suppose that there is a (per-
fectly known) delay between when the initial distribution
is generated, ti, and the time τ when the {quench; equi-
librate; semi-statically-evolve} process can begin, during
which time the system evolves according to a (perfectly
known) rate matrix. In particular, we derive expressions
for how the thermodynamic value of information varies
with the length of the delay.
In Section VII, we investigate the ensemble entropy
production calculated from effective trajectory probabil-
ities, i.e., from trajectory probabilities given by averag-
ing over apparatuses. We call this the phenomenological
(ensemble) EP. We begin by proving that phenomeno-
logical ensemble EP is a lower bound on the average over
apparatuses of the effective ensemble EP. So fixing the
apparatus and averaging over the trajectories — though
without knowing what value the apparatus is fixed to —
and then averaging over apparatuses increases EP, com-
pared to the case where we average apparatuses before
averaging trajectories.
The difference between effective EP and phenomeno-
logical EP is called likelihood EP. It measures the differ-
ence between log-likelihood functions estimated from the
forward and time-reversed trajectories.
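A toy calculation illustrates the ordering behind these statements. Here each ensemble EP is represented by a KL divergence between hypothetical forward and time-reversed trajectory distributions (a stand-in for the paper's trajectory-level definitions, with arbitrary numbers), and the joint convexity of the KL divergence makes the likelihood EP non-negative.

```python
import math

def kl(p, q):
    """KL divergence D(p || q), used here as a stand-in for an ensemble EP."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mix(dists, weights):
    """alpha-average of a list of distributions."""
    return [sum(w * d[x] for w, d in zip(weights, dists))
            for x in range(len(dists[0]))]

p_alpha = [0.4, 0.6]                # measure over two apparatuses (toy numbers)
p_fwd = [[0.7, 0.3], [0.1, 0.9]]    # forward "trajectory" probs per apparatus
p_rev = [[0.5, 0.5], [0.4, 0.6]]    # time-reversed probs per apparatus

# average over apparatuses of the effective EP
effective_EP = sum(pa * kl(f, r) for pa, f, r in zip(p_alpha, p_fwd, p_rev))
# phenomenological EP: EP of the alpha-averaged trajectory distributions
phenomenological_EP = kl(mix(p_fwd, p_alpha), mix(p_rev, p_alpha))
# likelihood EP: the gap, non-negative by joint convexity of the KL divergence
likelihood_EP = effective_EP - phenomenological_EP
```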
Considering trajectory versions of all three EPs, we
establish three detailed fluctuation theorems (DFT). In
addition to the well-studied DFT in the literature which
concerns a single, known apparatus, we establish the
DFT for the phenomenological EP and for likelihood EP.
The former represents the effective irreversibility of the
system by coarse-graining all the apparatuses. The lat-
ter represents how irreversibility affects the estimation of
the apparatus’ parameters when estimated by observing
the forward and time-reversed trajectories. These results
are illustrated by a simple example of a two-state system
coupled to one heat reservoir with uncertain tempera-
ture.
The paper ends with a discussion section in which we
describe just a few of the myriad directions for future
work.
II. ILLUSTRATIVE EXAMPLE
In this section we illustrate the importance of account-
ing for the uncertainty of the system parameters in an
experiment, with a simple example of a colloidal particle
in a moving laser trap. The dynamics of the particle is
given by the overdamped Langevin equation
$$\dot{x} = -\mu\,\frac{\partial V}{\partial x} + \xi,$$
where ξ is white noise and V is the potential. Let us
consider that the particle is dragged by an optical tweezer
with the harmonic potential
$$V_k(x, t) = \frac{k}{2}\,\bigl(x - \lambda(t)\bigr)^2,$$
where k is the stiffness parameter and λ(t) is the control
protocol. The average work is given by the Sekimoto
formula
$$W[\lambda(t)] = \int_0^{t_f} \mathrm{d}t\;\dot{\lambda}\,\left\langle \frac{\partial V_k(\lambda(t), x(t))}{\partial \lambda} \right\rangle,$$
where ⟨·⟩ is the ensemble average. Let us consider µ = 1.
Our aim is to move the trap from λi = 0 at time ti = 0 to
λf at time tf such that the average work is minimal. Fol-
lowing [27], the optimal protocol that minimizes the
average work can be expressed as
$$\lambda^\star_k(t) = \frac{\lambda_f\,(1 + kt)}{2 + k t_f},$$
and the corresponding optimal work as
$$W^\star_k = \frac{k\,\lambda_f^2}{2 + k t_f}. \qquad (3)$$
The complete derivation is done in Appendix A.
We focus on the realistic situation where the experi-
menter has to measure the stiffness parameter to be able
to determine the optimal protocol. The estimation is typ-
ically done by repeated measurement of k, which leads to
a histogram of k. In practice, often an experimenter will
implicitly assume that the uncertainty in k is due to the
measurement alone and take the average value of stiffness
k̄ as the single possible value. However, often the uncertainty
in the parameters can have a physical reason, e.g., impre-
cise calibration of the laser. In those kinds of scenarios
the stiffness can change for each run of the experiment,
and so the experimenter’s implicit assumption is invalid.
Write the stiffness parameter that the experimenter
uses to set up the control protocol as k, with the real
stiffness parameter written as κ (which in general differs
from k). In Appendix A we show that the work can then
be expressed as
$$W_\kappa[\lambda_k(t)] = W^\star_k + \frac{\lambda_f^2}{(2 + k t_f)^2}\left[\frac{\kappa^2 - k^2}{\kappa} + \frac{(k - \kappa)^2}{\kappa}\, e^{-\kappa t_f}\right].$$
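The penalty for a mis-estimated stiffness can be checked numerically without a full stochastic simulation: for a harmonic trap the variance contributions to the average work cancel, so it suffices to integrate the mean-position ODE d⟨x⟩/dt = −κ(⟨x⟩ − λ(t)) together with the work of the two protocol jumps implied by λ*_k. The following sketch is our own construction (not code from the paper), assuming µ = 1 and a particle prepared at the trap center.

```python
import math

def work_wrong_stiffness(k, kappa, lam_f, t_f, n=200_000):
    """Average work when the protocol lambda*_k is designed for stiffness k
    but the true trap stiffness is kappa (mobility mu = 1).

    Tracks only the mean position <x>, since for a harmonic trap the
    variance contributions to the average work cancel."""
    c = lam_f / (2 + k * t_f)   # protocol constant: lambda*_k(t) = c*(1 + k*t)
    dt = t_f / n
    x = 0.0                     # mean position; particle prepared at lambda_i = 0
    lam = c                     # protocol value just after the initial jump 0 -> c
    # work of the initial jump: <V(x, c) - V(x, 0)> = (kappa/2)*(c^2 - 2*<x>*c)
    W = 0.5 * kappa * (c * c - 2.0 * x * c)
    for i in range(n):
        # continuous part: dW = lambda_dot * <dV/dlambda> dt = c*k * kappa*(lam - x) dt
        W += (c * k) * kappa * (lam - x) * dt
        x += -kappa * (x - lam) * dt          # d<x>/dt = -kappa*(<x> - lambda)
        lam = c * (1.0 + k * (i + 1) * dt)
    # work of the final jump lambda(t_f-) -> lambda_f
    W += 0.5 * kappa * (lam_f**2 - lam**2 - 2.0 * x * (lam_f - lam))
    return W

def work_closed_form(k, kappa, lam_f, t_f):
    """The closed-form expression above: W*_k plus the mismatch penalty."""
    W_star_k = k * lam_f**2 / (2 + k * t_f)
    c2 = (lam_f / (2 + k * t_f))**2
    return W_star_k + c2 * ((kappa**2 - k**2) / kappa
                            + (k - kappa)**2 / kappa * math.exp(-kappa * t_f))
```

With matched stiffness (κ = k) the integration recovers the optimal work of Eq. (3), and with κ ≠ k the result always exceeds the true optimum W*_κ = κλ_f²/(2 + κt_f), quantifying the cost of designing the protocol with the wrong parameter.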