Quadratic minimization: from conjugate gradient to an adaptive Heavy-ball method with Polyak step-sizes

Baptiste Goujaud (CMAP, École Polytechnique, Institut Polytechnique de Paris, baptiste.goujaud@polytechnique.edu), Adrien Taylor (INRIA, École Normale Supérieure, CNRS, PSL Research University, Paris, adrien.taylor@inria.fr), Aymeric Dieuleveut (CMAP, École Polytechnique, Institut Polytechnique de Paris, aymeric.dieuleveut@polytechnique.edu)
October 13, 2022
Abstract
In this work, we propose an adaptive variation on the classical Heavy-ball method for convex quadratic
minimization. The adaptivity crucially relies on so-called “Polyak step-sizes”, which consist in using the
knowledge of the optimal value of the optimization problem at hand instead of problem parameters such as a
few eigenvalues of the Hessian of the problem. This method happens to also be equivalent to a variation of the
classical conjugate gradient method, and thereby inherits many of its attractive features, including its finite-time
convergence, instance optimality, and its worst-case convergence rates.
The classical gradient method with Polyak step-sizes is known to behave very well in situations in which it
can be used, and the question of whether incorporating momentum in this method is possible and can improve
the method itself appeared to be open. We provide a definitive answer to this question for minimizing convex
quadratic functions, an arguably necessary first step for developing such methods in more general setups.
1 Introduction
Consider the convex quadratic minimization problem in the form
\[
\min_{x\in\mathbb{R}^d} f(x) \triangleq \tfrac{1}{2}\langle x, Hx\rangle + \langle h, x\rangle \triangleq \tfrac{1}{2}\langle x-x_\star, H(x-x_\star)\rangle + f_\star, \tag{1}
\]
where $H \succeq 0$ is a symmetric positive semi-definite matrix, and we denote by $f_\star$ the minimum value of $f$ (a few instances of such problems are presented in Section 3). In the context of large-scale optimization (i.e., $d \gg 1$),
we are often interested in using first-order iterative methods for solving eq. (1). There are many known and
celebrated iterative methods for solving such quadratic problems, including conjugate gradient, Heavy-ball
methods (a.k.a. Polyak momentum), Chebyshev methods, and gradient descent. As each of those methods has different specifications, the choice of method largely depends on the application at hand. In particular, a
typical drawback of momentum-based methods is that they generally require the knowledge of some problem
parameters (such as extreme values of the spectrum of $H$). This problem is typically not as critical for simpler
gradient descent schemes with no momentum, although they generally still require some knowledge of problem parameters if we want to avoid using linesearch-based strategies. This limitation motivates the search for adaptive strategies, which fix their step-sizes using past observations about the problem at hand. In the context of (sub)gradient
descent, a famous adaptive strategy is the so-called Polyak step-size, which relies on the knowledge of the
optimal value $f_\star$:
\[
x_{t+1} = x_t - \gamma_t \nabla f(x_t), \quad \text{with} \quad \gamma_t = \frac{f(x_t)-f_\star}{\|\nabla f(x_t)\|^2}. \tag{2}
\]
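For concreteness, the following Python sketch (ours, not from the paper; the random problem instance and variable names are purely illustrative) runs gradient descent with the Polyak step-size of eq. (2) on a quadratic of the form eq. (1), where $f_\star$ is known by construction.

import numpy as np

# Illustrative convex quadratic: f(x) = 1/2 <x - x_star, H (x - x_star)> + f_star,
# so the optimal value f_star is known by construction (as Polyak step-sizes require).
rng = np.random.default_rng(0)
d = 20
A = rng.standard_normal((d, d))
H = A.T @ A                        # symmetric positive semi-definite Hessian
x_star = rng.standard_normal(d)    # minimizer
f_star = 0.0                       # optimal value

f = lambda x: 0.5 * (x - x_star) @ H @ (x - x_star) + f_star
grad_f = lambda x: H @ (x - x_star)

# Gradient descent with Polyak step-sizes, eq. (2).
x = np.zeros(d)
for t in range(200):
    g = grad_f(x)
    if g @ g == 0.0:                      # already at a minimizer
        break
    gamma = (f(x) - f_star) / (g @ g)     # Polyak step-size
    x = x - gamma * g

print("final suboptimality f(x) - f_star:", f(x) - f_star)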
Polyak steps were originally proposed in Polyak [1987] for nonsmooth convex minimization; they are also discussed in Boyd et al. [2003], and a few variants are proposed by, e.g., Barré et al. [2020], Loizou et al. [2021], Gower et al. [2022], including for stochastic minimization. In terms of speed, this strategy (and its variants) enjoys similar theoretical convergence properties to those of gradient descent. This method appears to perform quite well in applications where $f_\star$ can be efficiently estimated—see, e.g., Hazan and Kakade [2019] for an adaptation of the method for estimating it online (although Hazan and Kakade [2019] contains a number of mistakes, the
approach can be corrected to achieve the claimed target). Therefore, a remaining open question in this context is
whether the performance of this method can be improved by incorporating momentum in it. A first answer to this question was provided by Barré et al. [2020], although it is not clear whether that method can match the convergence properties of optimal first-order methods.
In this work, we answer this question for the class of quadratic problems. In short, it turns out that the following conjugate gradient-like iterative procedure
\[
x_{t+1} = \operatorname*{argmin}_{x} \|x-x_\star\|^2 \quad \text{s.t.} \quad x \in x_0 + \operatorname{span}\{\nabla f(x_0), \nabla f(x_1), \ldots, \nabla f(x_t)\}, \tag{3}
\]
can be rewritten exactly as a Heavy-ball-type method whose parameters are chosen adaptively using the value of $f_\star$. This might come as a surprise, as the iteration eq. (3) might seem impractical due to its formulation relying on the knowledge of $x_\star$. More precisely, eq. (3) is exactly equivalent to:
\[
x_{t+1} = x_t - (1+m_t)\, h_t \nabla f(x_t) + m_t (x_t - x_{t-1}), \tag{4}
\]
with parameters
\[
\forall t \geq 0, \quad h_t \triangleq \frac{2\,(f(x_t)-f_\star)}{\|\nabla f(x_t)\|^2}, \tag{5}
\]
\[
m_0 \triangleq 0 \quad \text{and} \quad \forall t \geq 1, \quad m_t \triangleq -\frac{(f(x_t)-f_\star)\,\langle \nabla f(x_t), \nabla f(x_{t-1})\rangle}{(f(x_{t-1})-f_\star)\,\|\nabla f(x_t)\|^2 + (f(x_t)-f_\star)\,\langle \nabla f(x_t), \nabla f(x_{t-1})\rangle}. \tag{6}
\]
In eq. (4), $m_t$ corresponds to the momentum coefficient and $h_t$ to a step-size. With the tuning of eq. (5), this step-size is twice the Polyak step-size in eq. (2) (for instance, on a one-dimensional quadratic with curvature $\lambda$, the Polyak step-size equals $1/(2\lambda)$ whereas $h_t = 1/\lambda$ reaches the minimizer in a single step). This Heavy-ball momentum method with Polyak step-sizes is summarized in Algorithm 1 and illustrated in Figure 1. Due to its equivalence with eq. (3), the Heavy-ball method eq. (4) inherits several advantageous properties of conjugate gradient-type methods, including:
(i) finite-time convergence: the problem eq. (1) is solved exactly after at most $d$ iterations,
(ii) instance optimality: for all $H \succeq 0$, no first-order method satisfying $x_{t+1} \in x_0 + \operatorname{span}\{\nabla f(x_0), \ldots, \nabla f(x_t)\}$ results in a smaller $\|x_t - x_\star\|$,
(iii) it inherits optimal worst-case convergence rates on quadratic functions.
Of course, a few of those points need to be nuanced in practice due to finite-precision arithmetic. The equivalence
between eq. (3) and eq. (4) is formally stated in the following theorem.
Theorem 1.1. Let $(x_t)_{t\in\mathbb{N}}$ be the sequence defined by the recursion eq. (3), namely such that for any $t$, $x_{t+1}$ is the Euclidean projection of $x_\star$ onto the affine subspace $x_0 + \operatorname{span}\{\nabla f(x_0), \nabla f(x_1), \ldots, \nabla f(x_t)\}$. Then $(x_t)_{t\in\mathbb{N}}$ is the sequence generated by Algorithm 1.
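As a side remark, here is a short informal sketch (ours, not the paper's proof) of why knowing $f_\star$ suffices to set the step-size: the projection defining eq. (3) enforces in particular $\langle x_{t+1}-x_\star, \nabla f(x_t)\rangle = 0$; plugging in the Heavy-ball form eq. (4), and using $\langle x_t-x_\star, \nabla f(x_t)\rangle = 2(f(x_t)-f_\star)$ together with $\langle x_{t-1}-x_\star, \nabla f(x_t)\rangle = \langle \nabla f(x_{t-1}), x_t-x_\star\rangle = 0$ (by symmetry of $H$ and the projection property at the previous step), gives, whenever $1+m_t \neq 0$,
\[
0 = (1+m_t)\,\langle x_t-x_\star, \nabla f(x_t)\rangle - (1+m_t)\, h_t\, \|\nabla f(x_t)\|^2
\quad\Longrightarrow\quad
h_t = \frac{2\,(f(x_t)-f_\star)}{\|\nabla f(x_t)\|^2},
\]
which is exactly eq. (5); the expression eq. (6) for $m_t$ follows similarly from $\langle x_{t+1}-x_\star, \nabla f(x_{t-1})\rangle = 0$.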
Algorithm 1 Adaptive Heavy-ball algorithm
Input: $T$ and $f : x \mapsto f(x) \triangleq \tfrac{1}{2}\langle x-x_\star, H(x-x_\star)\rangle + f_\star$
Initialize $x_0 \in \mathbb{R}^d$, $m_0 = 0$
for $t = 0, \ldots, T-1$ do
    $h_t = \dfrac{2\,(f(x_t)-f_\star)}{\|\nabla f(x_t)\|^2}$
    $x_{t+1} = x_t - (1+m_t)\, h_t \nabla f(x_t) + m_t (x_t - x_{t-1})$
    $m_{t+1} = -\dfrac{(f(x_{t+1})-f_\star)\,\langle \nabla f(x_{t+1}), \nabla f(x_t)\rangle}{(f(x_t)-f_\star)\,\|\nabla f(x_{t+1})\|^2 + (f(x_{t+1})-f_\star)\,\langle \nabla f(x_{t+1}), \nabla f(x_t)\rangle}$
end for
Result: $x_T$
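The following Python sketch (ours; the problem instance and names are illustrative) implements Algorithm 1 exactly as stated above. In exact arithmetic it should reach $x_\star$ after at most $d$ iterations; in floating-point arithmetic one only observes this up to rounding errors.

import numpy as np

def adaptive_heavy_ball(f, grad_f, f_star, x0, T):
    """Algorithm 1: Heavy-ball with Polyak step-sizes, following eqs. (4)-(6)."""
    x_prev, x, m = x0, x0, 0.0
    for t in range(T):
        g = grad_f(x)
        if g @ g == 0.0:                       # reached a minimizer
            return x
        h = 2.0 * (f(x) - f_star) / (g @ g)    # eq. (5): twice the Polyak step-size
        x_next = x - (1.0 + m) * h * g + m * (x - x_prev)    # eq. (4)
        g_next = grad_f(x_next)
        inner = (f(x_next) - f_star) * (g_next @ g)
        den = (f(x) - f_star) * (g_next @ g_next) + inner
        m = -inner / den if den != 0.0 else 0.0               # eq. (6), guarded near convergence
        x_prev, x = x, x_next
    return x

# Illustrative instance with known optimal value f_star = 0.
rng = np.random.default_rng(1)
d = 30
A = rng.standard_normal((d, d))
H = A.T @ A
x_star = rng.standard_normal(d)
f = lambda x: 0.5 * (x - x_star) @ H @ (x - x_star)
grad_f = lambda x: H @ (x - x_star)

x_T = adaptive_heavy_ball(f, grad_f, 0.0, np.zeros(d), T=d)
print("||x_T - x_star|| after d iterations:", np.linalg.norm(x_T - x_star))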
Theorem 1.1 turns out to be a particular case of a more general result stating that the iterates of any conjugate gradient-type method described with a polynomial $Q$ as
\[
x_{t+1} = \operatorname*{argmin}_{x} \big\{ \langle x-x_\star, Q(H)(x-x_\star)\rangle \ \text{s.t.}\ x \in x_0 + \operatorname{span}\{\nabla f(x_0), \ldots, \nabla f(x_t)\} \big\}, \tag{Q-minimization}
\]
are equivalently written in terms of an adaptive Heavy-ball iteration. In particular, eq. (3) corresponds to eq. (Q-minimization) with $Q(x) = 1$. Similarly, the classical conjugate gradient method corresponds to eq. (Q-minimization) with $Q(x) = x$ (this fact is quite famous, see, e.g., Nocedal and Wright [1999]). We were surprised not to find this general result written as is in the literature, and we therefore provide it in Section 2. The key point of this work is that the equivalent Heavy-ball reformulation of eq. (3) can be written in terms of $f_\star$, thereby obtaining a momentum-based Polyak step-size.
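As a numerical sanity check of Theorem 1.1, the projection iterates of eq. (3) (i.e., eq. (Q-minimization) with $Q(x)=1$) can be generated directly by solving small least-squares problems and compared with the iterates of Algorithm 1. A minimal self-contained sketch (ours; the instance is illustrative):

import numpy as np

# Illustrative instance: f(x) = 1/2 <x - x_star, H (x - x_star)>, so f_star = 0.
rng = np.random.default_rng(2)
d, T = 15, 6
A = rng.standard_normal((d, d))
H = A.T @ A
x_star = rng.standard_normal(d)
f = lambda x: 0.5 * (x - x_star) @ H @ (x - x_star)
grad_f = lambda x: H @ (x - x_star)

# Iterates of Algorithm 1 (adaptive Heavy-ball, eqs. (4)-(6)).
x_prev = x = np.zeros(d)
m = 0.0
hb = [x]
for t in range(T):
    g = grad_f(x)
    h = 2.0 * f(x) / (g @ g)
    x_next = x - (1.0 + m) * h * g + m * (x - x_prev)
    g_next = grad_f(x_next)
    inner = f(x_next) * (g_next @ g)
    den = f(x) * (g_next @ g_next) + inner
    m = -inner / den if den != 0.0 else 0.0
    x_prev, x = x, x_next
    hb.append(x)

# Iterates of eq. (3), generated independently: x_{t+1} is the projection of x_star
# onto x_0 + span{grad f(x_0), ..., grad f(x_t)}, computed by least squares.
proj = [np.zeros(d)]
for t in range(T):
    G = np.stack([grad_f(xs) for xs in proj], axis=1)        # d x (t+1) basis of gradients
    c, *_ = np.linalg.lstsq(G, x_star - proj[0], rcond=None)
    proj.append(proj[0] + G @ c)

for t in range(1, T + 1):
    print(t, np.linalg.norm(hb[t] - proj[t]))                # should be ~0 by Theorem 1.1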