Enabling Quantum Speedup of Markov Chains using a Multi-level
Approach
Xiantao Li
The Pennsylvania State University, University Park, Pennsylvania, 16802, USA
Xiantao.Li@psu.edu
October 26, 2022
Abstract
Quantum speedup for mixing a Markov chain can be achieved based on the construction of slowly-varying $r$ Markov chains, where the initial chain can be easily prepared and the spectral gaps have a uniform lower bound. The overall complexity is proportional to $r$. We present a multi-level approach to construct such a sequence of $r$ Markov chains by varying a resolution parameter $h$. We show that the density function of a low-resolution Markov chain can be used to warm start the Markov chain with high resolution. We prove that, in terms of the chain length, the new algorithm has $O(1)$ complexity rather than $O(r)$.
1 Introduction
Markov chains play a central role in modern data science algorithms [5], e.g., data assimilation and fore-
casting, uncertainty quantification, machine learning, stochastic optimization etc. One important scenario
is when the mixing properties of Markov chains are used to sample statistical quantities of interest with
respect to the equilibrium density π. However, simulating a Markov chain for this purpose can be a compu-
tationally demanding task, due to the dimensionality and mixing time. The seminal works of Aharonov et al.
and Szegedy [1, 19] put forward a paradigm to simulate Markov chains using quantum walks. A particular
emphasis in this line of works [21, 15, 12, 9, 22] has been placed on the speedup in terms of the spectral gap $\delta$, from the classical $1/\delta$ complexity to $1/\sqrt{\delta}$, without the unfavorable dependence on $\pi_\ast = \min_x \pi(x)$. An important approach for achieving this quadratic speedup is by constructing a sequence of Markov chains, $P_0, P_1, \cdots, P_r$, for which the stationary distributions of two successive Markov chains are close [2, 21, 18]. Assuming that the initial chain $P_0$ can be easily mixed, the overall complexity is $O\big(\tfrac{r}{\sqrt{\delta}}\big)$ times the complexity of implementing each quantum walk. This approach has been combined with the Chebyshev cooling schedule, together with quantum mean estimation, to compute expected values and partition functions [13], which yields the sampling complexity $O\big(\tfrac{r}{\sqrt{\delta}}\big)$. One important example of slowly varying Markov
chains is the Gibbs distribution parameterized by the inverse temperature, which is the primary mechanism
behind simulated annealing algorithms. Although the quadratic speedup has been envisioned to be a generic
property, explicit construction of the sequence of slowly varying Markov chains with uniformly bounded
spectral gaps is still an open issue.
This paper presents an alternative approach to construct multiple Markov chains. We consider the discretization of a Markov chain with continuous (infinite-dimensional) state space, which stems from general deterministic or stochastic dynamical systems. The discretization, using Ulam-Galerkin projection [20], lumps states into finitely many bins with bin size $h$. The novel aspect is that we can construct multiple Markov chains $P_h$ by varying $h$. When $h$ is large, e.g., $h = h_{\max}$, the state space of the Markov chain is
small, $P_h$ can be mixed quickly using either a classical or a quantum algorithm. On the other hand, when $h$ is small, e.g., $h = h_{\min}$, the continuous Markov chain is well approximated by $P_h$. To some extent, this removes the assumption in the framework of multiple Markov chains [21] on the preparability of the initial chain. We show that by varying $h$, e.g., from $2h$ to $h$ at each stage, the density functions of the Markov chains at two successive levels have significant overlap, thus enabling a smooth transition. Our main finding (Theorem 1) is that simulating the sequence of such multiple Markov chains $\{P_h\}_{h_{\max} \ge h \ge h_{\min}}$ has a cost that is comparable to simulating the Markov chain $P_{h_{\min}}$, as if it had a warm start.
Although our approach is constructed from a finite-dimensional approximation of a Markov chain with
infinite state space, the same methodology can be applied directly to certain finite Markov chains, especially
those that have been treated by multigrid methods [6].
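To make the coarse-to-fine idea concrete, the following is a minimal classical sketch, not the quantum-walk algorithm analyzed in this paper: it discretizes a toy one-dimensional chain on $[-1,1]$ at resolutions $h_{\max}, h_{\max}/2, \ldots, h_{\min}$, mixes the coarsest chain from a uniform start, and warm starts each finer level by splitting every coarse bin in two. The Gaussian kernel, its width sigma, and the equal-weight bin-splitting refinement are illustrative assumptions, not part of the paper's construction.

```python
import numpy as np

def transition_matrix(h, sigma=0.3):
    """Toy Ulam-Galerkin-style discretization on D = [-1, 1] (d = 1):
    bins of width h, a Gaussian step kernel of width sigma evaluated
    at bin midpoints and row-normalized (an illustrative assumption)."""
    centers = np.arange(-1 + h / 2, 1, h)
    K = np.exp(-(centers[None, :] - centers[:, None]) ** 2 / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def mix(pi, P, tol=1e-10, max_steps=100_000):
    """Iterate pi^T <- pi^T P until (approximately) stationary; return
    the density and the number of chain steps used."""
    for n in range(max_steps):
        new = pi @ P
        if np.linalg.norm(new - pi, 1) < tol:
            return new, n
        pi = new
    return pi, max_steps

def multilevel_stationary(h_max=0.5, h_min=1 / 64):
    """Coarse-to-fine sweep: mix P_{h_max} from a uniform start, then
    use each level's density to warm start the next level (h -> h/2)."""
    h = h_max
    P = transition_matrix(h)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    total_steps = 0
    while True:
        pi, steps = mix(pi, P)
        total_steps += steps
        if h <= h_min:
            return pi / h, total_steps     # divide by bin size to get density values
        h /= 2
        P = transition_matrix(h)
        pi = np.repeat(pi, 2) / 2.0        # warm start: split each coarse bin in two

if __name__ == "__main__":
    density, steps = multilevel_stationary()
    print(f"finest grid size: {density.size}, total chain steps: {steps}")
```

In this sketch, most of the chain steps are spent on the coarse, cheap levels, while each finer level starts close to its own stationary density; this is the warm-start effect quantified in Theorem 1.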
Problem Setup. We consider a Markov chain $\{X_n\}_{n \ge 0}$ with state space $S = \mathbb{R}^d$ and the (right) transition density $K(x, y)$, with $K(x, dy)$ indicating the probability that, given $x$, the Markov chain moves to state $y$ at the next step. This can be described in terms of the conditional expectation of any statistical quantity $f(\cdot)$, i.e., $\mathbb{E}[f(X_{n+1}) \,|\, X_n] = \int K(X_n, y) f(y)\, dy$. An alternative description of the Markov chain is the Chapman-Kolmogorov (CK) equation for the change of the probability density function (PDF) from step $n$ to $n + 1$,
$$p_{n+1}(y) = \int_{\mathbb{R}^d} p_n(x) K(x, y)\, dx, \qquad (1)$$
which is convenient in the study of mixing properties. The relation in Eq. (1) is often written in a matrix/vector multiplication form $p_{n+1}^T = p_n^T K$. In particular, a stationary density $p(x)$ is the left eigenvector associated with the eigenvalue 1: $p^T = p^T K$.
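As a small numerical illustration of the matrix/vector form and of the stationary density as the left eigenvector with eigenvalue 1, the snippet below compares the eigenvector computation with direct iteration of the CK update; the 3-state transition matrix is arbitrary and used only for illustration.

```python
import numpy as np

# A small row-stochastic transition matrix K with three lumped states,
# chosen arbitrarily for illustration (not derived from any kernel).
K = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# The stationary density is the left eigenvector of K with eigenvalue 1,
# i.e., the (right) eigenvector of K^T for eigenvalue 1.
vals, vecs = np.linalg.eig(K.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
p_stat = v / v.sum()                  # normalize to a probability vector

# The same density is reached by iterating the CK update p_{n+1}^T = p_n^T K.
p = np.array([1.0, 0.0, 0.0])         # an arbitrary initial density
for _ in range(200):
    p = p @ K
print(np.allclose(p, p_stat))         # expected: True
```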
Problem: Given a precision $\epsilon$, find a finite-dimensional approximation $p_h(x)$ of the stationary PDF $p$, with $h$ indicating a numerical parameter, such that $\| p - p_h \|_1 < \epsilon$.
Ulam-Galerkin projection. Many existing quantum algorithms work with a finite-dimensional form
of the CK equation (1). To approximate a Markov chain with continuous state space and implement the
algorithms on quantum computers, we have to quantize the problem. This is done in two steps: First we
consider a large domain $D$, where the probability of reaching states outside $D$ is negligible. This is a reasonable assumption if we consider the states close to equilibrium: since the PDF integrates to 1, the probability in the far field is negligible. For simplicity of the presentation, we choose $D = [-1, 1]^d$. In the second step, we can introduce a partition of $D$ with uniform spacing $h$ ($h := 1/N$ for some $N \in \mathbb{N}$),
$$D = \bigcup_{j \in \mathcal{N}(h)} D_j(h), \qquad \mathcal{N}(h) := \big\{ j : -N \le j_k < N, \; k \in [d] \big\}, \qquad (2)$$
where the subdomain is given by,
$$D_j(h) = \bigotimes_{k=1}^{d} \big[ j_k h, (j_k + 1) h \big). \qquad (3)$$
With this partition, the continuous states have been grouped into small non-intersecting bins with bin size $h^d$, amounting to a finite state space,
$$S_h := \big\{ D_j(h) \;\big|\; j \in \mathcal{N}(h) \big\}, \qquad |S_h| = \Big( \frac{2}{h} \Big)^d. \qquad (4)$$
For brevity, we simply refer to one such state in $S_h$ by $j$. Associated with the partition of the domain is a discretization of a PDF. This can be done by integrating the PDF $p(x)$ over each bin:
$$\pi_h(j) = \int_{D_j(h)} p(x)\, dx. \qquad (5)$$
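A minimal sketch of this discretization step, assuming $d = 2$, $h = 1/4$, a truncated Gaussian for $p(x)$, and simple Monte Carlo quadrature over each bin; all of these choices are illustrative and not prescribed by the paper.

```python
import numpy as np
from itertools import product

def bin_indices(N, d):
    """The index set N(h) from Eq. (2): all j with -N <= j_k < N for each k."""
    return list(product(range(-N, N), repeat=d))

def discretize_pdf(p, h, d, samples_per_bin=200, seed=0):
    """Approximate pi_h(j) = integral of p over D_j(h) (Eq. (5)) by
    sampling uniformly inside each bin D_j(h) = prod_k [j_k h, (j_k+1) h)."""
    rng = np.random.default_rng(seed)
    N = round(1 / h)
    pi_h = {}
    for j in bin_indices(N, d):
        lower = h * np.array(j)                      # lower corner of the bin
        x = lower + h * rng.random((samples_per_bin, d))
        pi_h[j] = h**d * np.mean(p(x))               # bin volume * average of p
    return pi_h

# Example: a (truncated) Gaussian density on D = [-1, 1]^2 with h = 1/4,
# giving |S_h| = (2/h)^d = 64 bins.
p = lambda x: np.exp(-8 * np.sum(x**2, axis=1)) * (8 / np.pi)
pi_h = discretize_pdf(p, h=0.25, d=2)
print(len(pi_h), sum(pi_h.values()))                 # 64, close to 1
```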