A general efficiency relation for molecular machines
Milo M. Lin
Green Center for Systems Biology, Department of Bioinformatics, Department of Biophysics, and the Center for
Alzheimer’s and Neurodegenerative Diseases, University of Texas Southwestern Medical Center, Dallas, TX 75390
Milo.Lin@UTSouthwestern.edu
Living systems efficiently use chemical fuel to do work, process information, and assemble patterns
despite thermal noise. Whether high efficiency arises from general principles or specific fine-
tuning is unknown. Here, applying a recent mapping from nonequilibrium systems to battery-
resistor circuits, I derive an analytic expression for the efficiency of any dissipative molecular
machine driven by one or a series of chemical potential differences. This expression disentangles
the chemical potential from the machine’s details, whose effect on the efficiency is fully specified by
a constant called the load resistance. The efficiency passes through a switch-like inflection point
if the balance between chemical potential and load resistance exceeds thermal noise. Therefore,
dissipative chemical engines qualitatively differ from heat engines, which lack threshold behavior.
This explains all-or-none dynein stepping with increasing ATP concentration observed in single-
molecule experiments. These results indicate that biomolecular energy transduction is efficient
not because of idiosyncratic optimization of the biomolecules themselves, but rather because the
concentration of chemical fuel is kept above a threshold level within cells.
Energy is a limiting resource for life. Therefore, molecular machines that harness differences in chemical
potential ∆µ to perform tasks within the cell must function as highly efficient chemical engines. Despite
thermal stochasticity, the measured efficiencies of biomolecular chemical engines are consistently in the 60-to-
90 percent range (1–3), well above that of typical human-designed systems such as heat engines. Two centuries
ago, Carnot showed that the maximum efficiency of heat engines is (4):
$\eta_{\rm heat} \le \frac{\Delta T}{\Delta T + T_0}$.  (1)
This relation depends on two independent parameters: the temperature differential driving the engine ∆T and
the exhaust temperature T0. Eq. 1 provides a universal constraint for all heat engines regardless of design
specifics, and initiated the field of thermodynamics. Finding a general relation constraining the behavior
of chemical engines would provide a unifying framework for biomolecular function and evolution, and for
engineering nanodevices for physiological conditions.
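As a quick numerical illustration of Eq. 1 (a minimal sketch with assumed example temperatures, not values taken from the text), the Carnot bound rises smoothly and saturates as the driving temperature differential grows, with no threshold:

```python
def eta_heat(dT, T0):
    """Carnot bound of Eq. 1: maximum efficiency for driving differential dT and exhaust temperature T0."""
    return dT / (dT + T0)

T0 = 300.0                        # assumed exhaust temperature (K), for illustration only
for dT in [30.0, 100.0, 300.0, 1000.0]:
    print(f"dT = {dT:6.0f} K  ->  eta_heat <= {eta_heat(dT, T0):.2f}")
```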
It is important to distinguish two broad classes of chemical engines in biology, as they should have different
formulations of efficiency as well as design constraints. The first is the class of energy transduction machines,
such as ion pumps, that convert energy from one chemical reservoir to another. In these cases, it is clear that
any energy dissipated by the machine is wasted and so should be minimized in order to maximize the efficiency
of transferring energy between the reservoirs, which is achieved in the "tight-coupling" limit (5).
This work is primarily concerned with the second class of chemical engines whose purpose is the dissipation
of the input energy. This class of dissipative machines serves a wide range of functions, including mechanical
transport against viscous drag, and maintaining patterns of protein assembly or signaling configurations that
are inaccessible at equilibrium (Fig. 1A). They are therefore principal agents of information processing and
structural reorganization within the cell. Prigogine called these configurations "dissipative structures" because
input energy is constantly dissipated to the environment as heat to maintain these patterns, which are far
from equilibrium even under steady state conditions (6). Chemical engines can approach perfect efficiency if
the throughput and power approach zero (5; 7). Therefore, unlike Carnot's relation Eq. 1, a useful general
relation for the efficiency of chemical engines must depend on at least one system-dependent collective variable
that reflects the constraint on throughput in the high efficiency limit, and reveal if there are regimes of the
collective variable and ∆µ for which the machine is both fast and efficient. Such a relation could thus also reveal whether achieving high performance in both metrics requires evolutionary fine-tuning or is a general property. We currently lack mechanistic insight into how chemical engine efficiency is controlled by tunable parameters, and it is unclear whether the multitude of system-specific parameters can be encapsulated by a single collective variable that is well defined for all systems, as required for a clear unified understanding.
Existing results on dissipative chemical engines, based on generalized fluctuation-dissipation relations
(8; 9), upper-bound the efficiency relative to observed fluctuations (10; 11), although the bound may not
be tight (achievable) far from equilibrium. Prior work does provide efficiency constraints under limiting
conditions. For example, weakly driven engines operating at maximum power must be 50% efficient (12–
14). However, this bound only holds near equilibrium, and does not apply to the strongly-driven conditions
relevant for biology. Numerical simulations of simple chemical engines indicate that efficiency is increased
when driven farther from equilibrium by increasing ∆µ, similar to the effect of increasing ∆T in heat engines
(12). This is consistent with a model in which molecular motors can become less wasteful if driven by higher
ATP concentration (15), and suggests a general link between large ∆µ and high efficiency.

FIG. 1 Mapping nonequilibrium cycles to battery-resistor circuits. Cellular function and homeostasis are maintained by cyclic molecular engines that harness chemical energy to transform and organize matter (simplified schematic of a cell in A; energy flow in red). The circuit mapping transforms a dynamical system, for example an enzymatic engine that activates a signaling molecule using energy released from hydrolyzing ATP into ADP and phosphate (B; star denotes activated enzyme complex), into an electronic circuit consisting of batteries and resistors that obey Ohm's law (C). Subsequently, the circuit elements can be systematically simplified to describe coarse-grained probabilities and currents without loss of accuracy (D).
Here, using a recent generalization of the Boltzmann distribution to nonequilibrium systems (16), I derive
a general equation for the maximum efficiency of dissipative chemical engines. This relation depends on
two independent tunable variables: ∆µ and the relative load resistance. The load resistance is a collective variable condensing the engine details, and is a constant unaffected by how strongly the engine is driven (i.e. ∆µ). The equation shows generally that biomolecular processes are intrinsically efficient because ∆µ is much
larger than the thermal energy kT in vivo. However, unlike heat engines, chemical engines have an efficiency
inflection point at a threshold ∆µ/kT that depends on the load resistance. This switch-like transition explains
single-molecule in vitro measurements of dynein molecular motor stepping. This result also leads to a general
tradeoff relation between energy efficiency and time efficiency. More broadly, the existence of a general
efficiency inflection point provides a candidate thermodynamic necessary condition for life.
Circuit mapping. Chemical engines within the cell (circular arrows in Fig. 1A) consume energy to
form patterns of activated signaling molecules, protein self-assembly, or directed movement that would be vanishingly rare at equilibrium. Such processes can be modeled as nonequilibrium Markov chains, but such systems have largely resisted analytical treatment (17–19). This work exploits a
recently found mapping from the dynamical network of any Markovian system to an electronic circuit (16)
(See, for example, Fig. 1C,D). Consequently, a system is decomposed into a passive equilibrium portion
("resistors") that is driven by chemical potential differences ∆µ ("batteries"). The resistance between two neighboring states m and n is $R_{mn} \equiv e^{\beta G_m}/k_{mn}$, where $\beta = 1/kT$, $G_m$ is the free energy of state m at equilibrium (i.e. in the absence of driving), and $k_{mn}$ is the equilibrium rate constant of transitioning from state m to n. If a transition is directly driven by ∆µ, this transition is mapped to a battery with a probability potential drop $E_{mn} \equiv (e^{\beta\Delta\mu} - 1)\,e^{\beta G_m} P_m$ (red arrow in Fig. 1B). Using this mapping, probability flow amongst the states of any system obeys Ohm's law (16): $P_j e^{\beta G_j} - P_i e^{\beta G_i} = \sum_{m,n} (E_{mn} - R_{mn} I_{mn})$, where the summation is along any trajectory (parameterized by neighboring states m, n) connecting states i and j, and $I_{mn}$ is the probability current (net flux) from state m to state n. Note that $E_{mn}$ and $I_{mn}$ are zero for a system at equilibrium, and Ohm's law reduces to the Boltzmann distribution $P_j = P_i e^{\beta(G_i - G_j)}$ in this case, as expected.
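To make the mapping concrete, the sketch below builds a hypothetical three-state cycle (the free energies, kinetic prefactors, and driving strength are arbitrary illustrative choices, not parameters from this work), solves for its steady state, and checks Ohm's law numerically along both the driven edge and the undriven pathway:

```python
import numpy as np

kT = 1.0
G = np.array([0.0, 1.0, 0.5])                # free energies of states 0,1,2 (units of kT); illustrative
A = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0}  # symmetric kinetic prefactors; illustrative

def k_eq(m, n):
    """Equilibrium rate constant m -> n obeying detailed balance, k_mn/k_nm = exp(beta(G_m - G_n))."""
    a = A.get((m, n)) or A.get((n, m))
    return a * np.exp(G[m] / kT)

alpha = 5.0                                  # extra rate driving the 0 -> 1 transition (the battery)
dmu = kT * np.log(1.0 + alpha / k_eq(0, 1))  # corresponding chemical potential difference

# Driven rate matrix W[n, m] = rate m -> n, and its steady-state distribution P.
W = np.zeros((3, 3))
for (m, n) in [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)]:
    W[n, m] = k_eq(m, n)
W[1, 0] += alpha
W -= np.diag(W.sum(axis=0))
P = np.abs(np.linalg.svd(W)[2][-1])
P /= P.sum()

# Circuit elements: resistances R_mn, probability currents I_mn, and the battery drop E_01.
R = lambda m, n: np.exp(G[m] / kT) / k_eq(m, n)
I = lambda m, n: P[m] * W[n, m] - P[n] * W[m, n]
E01 = (np.exp(dmu / kT) - 1.0) * np.exp(G[0] / kT) * P[0]

# Ohm's law: the probability-potential drop between states 0 and 1 is path independent.
lhs = P[1] * np.exp(G[1] / kT) - P[0] * np.exp(G[0] / kT)
print(lhs, E01 - R(0, 1) * I(0, 1))                 # along the driven edge
print(lhs, -R(0, 2) * I(0, 2) - R(2, 1) * I(2, 1))  # along the undriven path 0 -> 2 -> 1
```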
In this mapping, multiple interconnected dynamical transitions can be systematically combined into a single
resistor, called the Thevenin equivalent resistance (20), that captures their collective effect on the probability
flux at steady state. For example, for the signaling protein in Fig. 1B, molecular transitions corresponding to $R_1, ..., R_6$ (Fig. 1C) can be combined as series and parallel resistors into the equivalent resistance $R_L = R_1 + \left(\frac{1}{R_2} + \frac{1}{R_3 + R_4 + R_5 + R_6}\right)^{-1}$ (Fig. 1D). Resistor coarse-graining, combined with the broad set of circuit
theorems, allows a systematic approach to simplify systems of otherwise intractable complexity (16). This ap-
proach is most powerful for systems, such as biomolecular machines, for which the driven transitions are sparse.
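A minimal sketch of the coarse-graining step, using hypothetical resistance values for the six transitions of Fig. 1C (the numbers are placeholders, not fitted to any real enzyme):

```python
def series(*rs):
    """Resistances of transitions traversed in sequence add."""
    return sum(rs)

def parallel(*rs):
    """Parallel pathways combine through their conductances."""
    return 1.0 / sum(1.0 / r for r in rs)

# Placeholder values for R_1 ... R_6
R1, R2, R3, R4, R5, R6 = 1.0, 2.0, 0.5, 1.5, 0.8, 1.2
R_L = series(R1, parallel(R2, series(R3, R4, R5, R6)))  # R_L = R_1 + (1/R_2 + 1/(R_3+R_4+R_5+R_6))^-1
print(R_L)
```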
Defining the efficiency of dissipative chemical engines. Dissipative machines within the cell are
usually composed of proteins that occupy one of many possible conformational and interaction states (Fig. 1B).
Each machine harnesses a source of chemical energy, often by binding excess ATP and coupling its hydrolysis
into ADP and phosphate to drive the target reaction. Consider the circuit representation of a machine driven
by the source transition S (i.e. battery) with $\Delta\mu = kT \ln\!\left(1 + \frac{\alpha_S}{k_S}\right)$, where $k_S$ is the apparent rate constant of the source transition in the absence of driving (e.g. no excess ATP at equilibrium), and $\alpha_S$ is the enhancement in the rate constant due to driving. If the energy source is ATP, the source transition is enhancement of the rate of converting unbound phosphate and ADP into ATP bound to the protein (red arrow in Fig. 1B). At ATP concentrations well below binding saturation, $\alpha_S$ is proportional to ATP concentration in excess of the equilibrium concentration. The resistance of the source transition (i.e. battery) is denoted $R_S$ (Fig.
2A). A machine may contain multiple source transitions. Although the circuit framework is compatible with
arbitrary arrangements of batteries, the analytic results in this work are restricted to those machines with a
single source transition or multiple source transitions in series. Nevertheless, this class of machines includes a
large fraction of cellular processes including signaling, self-assembly and sorting, and mechanical transport.
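The relation ∆µ = kT ln(1 + α_S/k_S) can be evaluated directly; the sketch below uses arbitrary rate enhancements α_S/k_S to show how strongly the source transition must be driven to reach a ∆µ of many kT (for orientation, ATP hydrolysis under cellular conditions is commonly quoted at roughly 20 kT):

```python
import numpy as np

kT = 1.0

def delta_mu(alpha_over_k):
    """Chemical potential of the source transition: Delta mu = kT ln(1 + alpha_S/k_S)."""
    return kT * np.log1p(alpha_over_k)

# Illustrative rate enhancements (proportional to excess ATP well below binding saturation)
for x in [0.0, 1.0, 10.0, 1e3, 1e8]:
    print(f"alpha_S/k_S = {x:10.0e}  ->  dmu = {delta_mu(x):6.2f} kT")
```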
The source transition breaks detailed balance for the machine by driving net flux of downstream transitions
between states, and this flux may take numerous alternate pathways through state space. All transitions aside
from the source transition can be combined into a single Thevenin equivalent load resistor $R_L$ (Fig. 2A).
Often, the biologically useful output of the engine is to enhance a particular state within the load.
With this formulation, we can now define the maximum efficiency of a dissipative machine as the
dissipation rate of the load divided by the total dissipation rate. This definition corresponds to existing
formulations of efficiency as special cases. For example, in the case that the machine is a molecular motor
transporting a cargo against viscous drag, the load dissipation coincides with the maximum mechanical work
that can be done by the motor per cycle. In this case, the efficiency is equivalent to the Stokes efficiency,
which is the average viscous drag times the average velocity divided by the total dissipation rate (21). More
broadly, this definition also extends the concept of efficiency to most processes in the cell, such as signaling
and pattern formation, in which the useful dissipation of energy does not correspond to work along a simple
reaction coordinate.
A general efficiency relation for dissipative chemical engines. We first obtain the total dissi-
pation rate σT, where σ is the entropy production rate. The nonequilibrium fluctuation theorems (22–24) relate dissipation rate to transition probabilities: $\sigma T = kT \sum_{ij} I_{ij} \ln\!\left[\frac{P_i (k_{ij} + \alpha_{ij})}{P_j k_{ji}}\right]$ (18). Because dissipation
is summed over all possible dynamical transitions of a system, it may appear that knowing an engine’s details
is required to calculate the efficiency. Instead, I show here that $R_L$ is the only information about the load
needed to calculate the dissipation, and therefore the efficiency. First, using Tellegen’s Circuit Theorem (25),
the total steady-state dissipation rate is (See Supp. Information):
$\sigma T = \sum_{ij} I_{ij}\, \Delta\mu_{ij}$.  (2)
In this reformulation, the total dissipation rate is a function of only the currents of the directly driven transitions
(for which $\alpha_{ij} \neq 0$), even though energy is dissipated at all transitions. Eq. 2 is especially useful if driven
transitions are sparse.
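Eq. 2 can be checked numerically on the same hypothetical three-state cycle sketched above (again with arbitrary illustrative parameters): summing the fluctuation-theorem expression over every transition reproduces the driven current times ∆µ, with no reference to the internal details of the load:

```python
import numpy as np

kT, G = 1.0, np.array([0.0, 1.0, 0.5])       # illustrative free energies
A = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0}  # illustrative symmetric prefactors
k = lambda m, n: (A.get((m, n)) or A.get((n, m))) * np.exp(G[m] / kT)  # equilibrium rates

alpha, S = 5.0, (0, 1)                        # drive the 0 -> 1 edge
dmu = kT * np.log(1.0 + alpha / k(*S))

# Driven rate matrix and steady state.
W = np.zeros((3, 3))
for (m, n) in [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)]:
    W[n, m] = k(m, n) + (alpha if (m, n) == S else 0.0)
W -= np.diag(W.sum(axis=0))
P = np.abs(np.linalg.svd(W)[2][-1])
P /= P.sum()

# Total dissipation rate from the fluctuation-theorem sum over all transition pairs ...
sigmaT = sum(
    kT * (P[m] * W[n, m] - P[n] * W[m, n]) * np.log((P[m] * W[n, m]) / (P[n] * W[m, n]))
    for (m, n) in [(0, 1), (1, 2), (2, 0)]
)

# ... equals the driven current times Delta mu (Eq. 2).
I_S = P[S[0]] * W[S[1], S[0]] - P[S[1]] * W[S[0], S[1]]
print(sigmaT, I_S * dmu)   # the two numbers should agree
```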
If there is a single driven transition, the steady-state load dissipation rate is (Supp. Information): $\sigma_L T = kT\, I_S \ln\!\left(\frac{R_S + R_L\, e^{\Delta\mu/kT}}{R_S + R_L}\right)$. The difference between $\sigma T$ and $\sigma_L T$ is the dissipation of the source (battery), which
corresponds to energy that is not accessible to the load. The efficiency is upper-bounded because of the
dissipation of the source, which corresponds to the energy lost whenever the engine dynamics happens to
reverse the source transition, for example, if bound ATP is transformed into unbound ADP and phosphate
without coupling to any downstream load reactions. The maximum steady state efficiency is $\eta_{\rm chem} \equiv \sigma_L/\sigma$.
This yields the main result of this work, a general relation governing the maximum efficiency of dissipative
chemical engines:
$\eta_{\rm chem} \le \frac{kT}{\Delta\mu} \ln\!\left[\frac{R_S/R_L + e^{\Delta\mu/kT}}{R_S/R_L + 1}\right]$.  (3)
The efficiency depends on the chemical potential difference relative to kT, which quantifies how the efficiency
is limited by thermal fluctuations. The functional form of Eq. 3 remains valid in the case that the engine is
cumulatively powered by multiple driven transitions (e.g. sequential hydrolysis of multiple ATP molecules);
in this general case, ∆µ is replaced by the sum of ∆µ's and $R_S$ is replaced by the weighted sum of all driven
resistors (See Supp. Information). An engine will approach the maximum efficiency set by Eq. 3 if the most
dissipative reactions within the load are useful. In practice, the load may include nonproductive futile cycles
which dissipate energy subsequent to hydrolysis.
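A minimal sketch evaluating Eq. 3 (with assumed, illustrative values of R_S/R_L) makes the threshold behavior explicit: for a given load resistance ratio, the maximum efficiency stays low until ∆µ/kT crosses a threshold set by R_S/R_L and then rises switch-like toward one:

```python
import numpy as np

def eta_chem(dmu_over_kT, Rs_over_RL):
    """Maximum efficiency of a dissipative chemical engine, Eq. 3."""
    x, r = dmu_over_kT, Rs_over_RL
    return np.log((r + np.exp(x)) / (r + 1.0)) / x

# Illustrative load resistance ratios; larger R_S/R_L pushes the efficiency threshold to larger dmu/kT.
for r in [0.1, 1.0, 10.0, 100.0]:
    row = "  ".join(f"{eta_chem(x, r):.2f}" for x in [2.0, 5.0, 10.0, 25.0])
    print(f"R_S/R_L = {r:6.1f}:  eta at dmu/kT = 2, 5, 10, 25  ->  {row}")
```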