The (1 + (λ, λ)) Global SEMO Algorithm
Benjamin Doerr, Omar El Hadri, Adrien Pinard
October 10, 2022
Abstract
The (1 + (λ, λ)) genetic algorithm is a recently proposed single-objective evolu-
tionary algorithm with several interesting properties. We show that its main work-
ing principle, mutation with a high rate and crossover as repair mechanism, can
be transported also to multi-objective evolutionary computation. We define the
(1 + (λ, λ)) global SEMO algorithm, a variant of the classic global SEMO algo-
rithm, and prove that it optimizes the OneMinMax benchmark asymptotically
faster than the global SEMO. Following the single-objective example, we design a
one-fifth rule inspired dynamic parameter setting (to the best of our knowledge for
the first time in discrete multi-objective optimization) and prove that it further
improves the runtime to O(n^2), whereas the best runtime guarantee for the global
SEMO is only O(n^2 log n).
1 Introduction
The theory of evolutionary algorithms (EAs) has for a long time accompanied our at-
tempts to understand the working principles of evolutionary computation [NW10, AD11,
Jan13, DN20]. In recent years, this field has not only explained existing approaches,
but also proposed new operators and algorithms.
The theory of multi-objective EAs, due to the higher complexity of these algorithms,
is still lagging behind its single-objective counterpart. There are several runtime analyses
for various multi-objective EAs which explain their working principles. Also, some new
ideas specific to multi-objective evolutionary algorithms (MOEAs) have been developed
recently. However, many recent developments in single-objective EA theory have not been
exploited in multi-objective evolutionary computation (see Section 2 for more details).
In this work, we try to profit in multi-objective evolutionary computation from the
ideas underlying the (1 + (λ, λ)) GA. The (1 + (λ, λ)) GA, proposed first in [DDE15],
tries to combine a larger radius of exploration with traditional greedy-style exploitation
of already detected profitable solutions. To this end, the (1 + (λ, λ)) GA uses mutation
Author-generated version of [DHP22]
Laboratoire d'Informatique (LIX), CNRS, École Polytechnique, Institut Polytechnique de Paris,
Palaiseau, France
École Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
École Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
arXiv:2210.03618v1 [cs.NE] 7 Oct 2022
with a higher-than-usual mutation rate together with a biased crossover with the parent,
which can repair the unwanted effects of the aggressive mutation operations.
We defer the detailed discussion of this algorithm to Section 4 and note here only
that several results indicate that this basic idea was successful. In the early works on
the OneMax benchmark, the (1 + (λ, λ)) GA was shown to have a better runtime than
the (1 + 1) EA for a decent range of parameters. With the optimal parameter setting,
this runtime becomes Θ(n (log(n) log log log(n) / log log(n))^{1/2}). While this is not a radical improvement
over the Θ(n log n) runtime of many classic EAs, it is still noteworthy, in particular
when recalling that no black-box algorithm can optimize this benchmark faster than in
time Θ(n/ log n) [DJW06]. With a suitable fitness-dependent parameter setting or a self-
adjusting parameter choice following a 1/5-rule, the runtime of the (1 + (λ, λ)) GA can
further be lowered to O(n) [DD18]. Similar results have been obtained for random satisfia-
bility instances in a planted solution model [BD17]. Together with a heavy-tailed choice of
the parameters, the (1 + (λ, λ)) GA can also obtain a linear runtime [ABD20a]. Stronger
runtime improvements over many classic algorithms have been obtained on the Jump
benchmark, see [ABD21] and the references therein. Since here the right choice of the pa-
rameters is less understood and since a multi-objective analogue of the Jump-benchmark
has been proposed [DZ21] only very recently, in this first work on multi-objective ver-
sions of the (1 + (λ, λ)) GA we shall concentrate on the more established OneMinMax
problem, which is a multi-objective analogue of the classic OneMax benchmark.
Our results: We develop a multi-objective version of the (1 + (λ, λ)) GA by inter-
preting the main loop of the (1 + (λ, λ)) GA as a complex mutation operator and then
equipping the classic global simple evolutionary multi-objective optimizer (GSEMO) with
this mutation operator. For this algorithm, which we call (1 + (λ, λ)) GSEMO, we con-
duct a mathematical runtime analysis on the OneMinMax benchmark. We show that a
reasonable range of static parameter settings gives a better runtime than the O(n^2 log n)
guarantee known for the classic GSEMO. With an optimal parameter choice we obtain a
runtime of O(n^2 √(log n)). With a suitable state-dependent parameter choice comparable
to the one of [DDE15], the runtime drops further to O(n^2). Since such a state-dependent
parameter choice requires a deep understanding of the algorithm and the problem, we
then design a self-adjusting parameter setting inspired by the classic one-fifth success
rule. Some adjustments are necessary for it to work in this multi-objective setting, but then we
obtain a simple dynamic parameter setting that gives the same O(n^2) runtime as with
the complicated state-dependent parameter choice. To the best of our knowledge, this
is the first time that such a dynamic parameter choice is developed for a MOEA for a
discrete search space. From a broader perspective, this work shows that the main idea of
the (1 + (λ, λ)) GA, so far only used with (1 + 1)-type algorithms, can also be used in
more complex algorithmic frameworks.
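The one-fifth success rule mentioned above (developed for our setting in Section 7) can be sketched as follows. This is an illustrative Python sketch of the classic single-objective update of [DD18], not the exact multi-objective variant of this paper; the update strength F = 1.5 and the clamping bounds are our illustrative choices.

```python
# Sketch of the classic one-fifth success rule for the offspring population
# size lambda of the (1 + (lambda, lambda)) GA, following [DD18].
# F is the update strength; the value 1.5 is an illustrative assumption.

F = 1.5


def update_lambda(lam: float, success: bool, n: int) -> float:
    """Shrink lambda after a successful iteration, grow it after a failure.

    In equilibrium roughly one in five iterations succeeds, since
    (1/F) * (F**(1/4))**4 = 1, hence the name "one-fifth rule".
    """
    if success:
        lam = lam / F
    else:
        lam = lam * F ** (1 / 4)
    # keep lambda in a sensible range [1, n]
    return min(max(lam, 1.0), float(n))
```

Successes thus lower λ quickly (cheaper iterations when progress is easy) while repeated failures raise it slowly, which is the mechanism behind the O(n) OneMax result of [DD18].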
This work is organized as follows. In the following section, we describe the previous
works most relevant to ours. In Section 3, we briefly recall the basic notations of MOEAs
and state the OneMinMax problem. We develop the (1 + (λ, λ)) GSEMO in Section 4
and conduct our mathematical runtime analysis for static parameters in Section 5. In
Section 6, we propose state-dependent parameters and prove the O(n^2) runtime guarantee
for these. We develop and analyze the self-adjusting parameter choice inspired by the one-
fifth rule in Section 7. A short experimental evaluation in Section 8 shows that the
(1 + (λ, λ)) GSEMO with static parameters easily outperforms the classic GSEMO,
and this already for small problem sizes. We summarize our findings and discuss future
research ideas in Section 9.
2 Previous Work
Soon after the first runtime analyses of single-objective EAs appeared, see,
e.g., [Rud97, GKS99, DJW02] for three early and influential works, also multi-objective
EAs were studied under this theory perspective. These works, e.g., [LTZ+02, Gie03,
LTZ04], followed the example of the theoretical works on single-objective EAs and proved
estimates on the runtime of simple MOEAs such as the SEMO and GSEMO on bench-
mark problems that were defined as multi-objective analogues of classic benchmarks like
OneMax or LeadingOnes.
In the recent past, the theory of MOEAs has concentrated more on research topics
which are specific to multi-objective optimization, e.g., parent selection schemes
that speed up the exploration of the Pareto front [OGNS20], approximations of the
Pareto front [BFN08, DGN16], or specific MOEAs such as the MOEA/D or the NSGA-
II [LZZZ16, HZCH19, HZ20, ZLD22, ZD22, BQ22, DQ22]. While it is natural that the
theory of MOEAs has addressed these questions, at the same time this carries the risk
that trends and insights from the general EA theory are not exploited in multi-objective
evolutionary computation. In fact, this effect is already very visible. Topics such as
precise runtime analyses (see, e.g., [AD21a] and the references therein), fixed-budget
analysis [JZ14], and optimal parameter settings (see, e.g., [Wit13] for an early such result
in single-objective EA theory) have not been considered yet in multi-objective EA theory.
More critically, also the recently developed new algorithmic building blocks, for exam-
ple, a large number of successful ways to dynamically set parameters [DD20], memetic
algorithms with proven performance guarantees (see [NS20] and the references therein),
and hyperheuristic approaches with proven guarantees (see [LOW19] and the references
therein) have all not been considered for MOEAs. In fact, to the best of our knowledge,
the only example of a work transporting recent algorithmic ideas developed in single-
objective EA theory to the multi-objective world is [DZ21], where the fast mutation
of [DLMN17] and the stagnation detection of [RW20] are used in a MOEA.
For this reason, we shall study in this work how another algorithmic idea recently
proposed in EA theory can be used in multi-objective evolutionary computation, namely
the (1 + (λ, λ)) GA. We defer an account of the previous work on this algorithm to
Section 4, where this algorithm will be detailed.
3 Preliminaries
In this section, we give a brief introduction to multi-objective optimization and to the
notation we use.
For a, b ∈ ℝ, we write [a..b] = {z ∈ ℤ | a ≤ z ≤ b} for the integers in the interval
[a, b]. We denote the binomial distribution with parameters n and p by Bin(n, p), and we
write X ∼ Bin(n, p) to denote that X is a sample from this distribution, that is, that
Pr[X = i] = (n choose i) p^i (1 − p)^{n−i} for all i ∈ [0..n].
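As a small illustration of this notation, the probability mass function of Bin(n, p) can be computed directly with Python's `math.comb`; the function name below is our choice.

```python
import math


def binom_pmf(n: int, p: float, i: int) -> float:
    """Pr[X = i] for X ~ Bin(n, p): (n choose i) * p^i * (1-p)^(n-i)."""
    return math.comb(n, i) * p ** i * (1 - p) ** (n - i)
```

For example, the pmf sums to 1 over i ∈ [0..n], as any probability distribution must.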
For ease of presentation, in the remainder we concentrate on two
objectives which have to be maximized. A bi-objective function on the search space Ω is
a pair f = (f_1, f_2), where each f_i : Ω → ℝ. We write f(x) = (f_1(x), f_2(x)) for all x ∈ Ω.
We shall always assume that we have a bit-string representation, that is, Ω = {0, 1}^n
for some n ∈ ℕ. The challenge in multi-objective optimization is that often there is no
solution x that maximizes both f_1 and f_2.
We say x weakly dominates y, denoted by x ⪰ y, if and only if f_1(x) ≥ f_1(y) and
f_2(x) ≥ f_2(y). We say x strictly dominates y, denoted by x ≻ y, if and only if x ⪰ y
and at least one of the two inequalities is strict. We say that a solution is
Pareto-optimal if it is not strictly dominated by any other solution. The set of objective
values of all Pareto optima is called the Pareto front of f. In this work, as in many previous
works on the (G)SEMO family of algorithms, our aim is to compute the full Pareto front,
that is, to compute a set P of Pareto optima such that f(P) = {f(x) | x ∈ P} is the
Pareto front.
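These dominance relations translate directly into code; the following is a minimal sketch for the bi-objective case, operating on objective-value pairs (the helper names are ours).

```python
from typing import Tuple

Objectives = Tuple[float, float]  # (f_1(x), f_2(x)) for a solution x


def weakly_dominates(fx: Objectives, fy: Objectives) -> bool:
    """x weakly dominates y: x is at least as good as y in both objectives."""
    return fx[0] >= fy[0] and fx[1] >= fy[1]


def strictly_dominates(fx: Objectives, fy: Objectives) -> bool:
    """x strictly dominates y: weak dominance with at least one strict inequality."""
    return weakly_dominates(fx, fy) and fx != fy
```

Note that two solutions with equal objective vectors weakly dominate each other, but neither strictly dominates the other.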
In this work, we will mainly be interested in the OneMinMax benchmark function,
which is a bi-objective analogue of the classic OneMax benchmark. It is defined by

OneMinMax : {0, 1}^n → ℝ^2,  x ↦ ( Σ_{i=1}^n x_i , Σ_{i=1}^n (1 − x_i) ),

that is, the first objective counts the number of ones in x and the second objective counts
the number of zeros. We immediately see that any x ∈ {0, 1}^n is Pareto optimal. Hence
the Pareto front of this problem is {(i, n − i) | i ∈ [0..n]}.
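A direct implementation of this benchmark (the function name is our choice) makes the definition concrete.

```python
def one_min_max(x):
    """OneMinMax: map a bit string x to (number of ones, number of zeros)."""
    ones = sum(x)
    return (ones, len(x) - ones)

# Every bit string is Pareto-optimal: flipping a bit to improve one
# objective necessarily worsens the other by the same amount, so the
# Pareto front is {(i, n - i) | i in [0..n]}.
```

For instance, `one_min_max((1, 0, 1, 1))` returns `(3, 1)`.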
4 From the (1 + (λ, λ)) GA to the (1 + (λ, λ)) GSEMO
In this section, we design a MOEA building on the main ideas of the (1 + (λ, λ)) GA.
We start with a brief description of the (1 + (λ, λ)) GA and the known results on this
algorithm, then move on to the GSEMO, and finally discuss how to merge the two.
4.1 The (1 + (λ, λ)) GA
The (1 + (λ, λ)) GA is a simple genetic algorithm for the maximization of pseudo-
Boolean functions, that is, functions f : {0, 1}^n → ℝ defined on the set of bit-strings of
length n. The (1 + (λ, λ)) GA works with a parent population of size one. Its parameters
are the offspring population size λ ∈ ℕ, the mutation rate p ∈ [0, 1], usually parameterized
as p = k/n for some number k ∈ [0, n], and the crossover bias c ∈ [0, 1]. In a first mutation
stage, λ offspring are created from the parent individual. Each offspring is distributed as
if obtained from bit-wise mutation with mutation rate p; however, to remedy the risk that
offspring are better or worse just because they have a different distance from the parent,
all offspring are created at the same distance. Consequently, first a number ℓ is sampled
according to a binomial distribution with parameters n and p, and then λ offspring are
generated, each by flipping a random set of exactly ℓ bits in the parent. In the intermediate
selection stage, an offspring with maximal fitness is selected (breaking ties randomly).
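The mutation stage and intermediate selection just described can be sketched as follows. This is an illustrative Python sketch under our own naming (the function name and the generic `fitness` callable are assumptions, not notation from the paper).

```python
import random


def mutation_phase(parent, lam, p, fitness):
    """Mutation stage of the (1 + (lambda, lambda)) GA.

    Sample ell ~ Bin(n, p) once, create lam offspring by flipping exactly
    ell randomly chosen distinct bits of the parent each, and return a best
    offspring (ties broken uniformly at random).
    """
    n = len(parent)
    # one Bin(n, p) sample, shared by all offspring of this iteration
    ell = sum(random.random() < p for _ in range(n))
    offspring = []
    for _ in range(lam):
        child = list(parent)
        for i in random.sample(range(n), ell):  # exactly ell distinct positions
            child[i] = 1 - child[i]
        offspring.append(child)
    # intermediate selection: an offspring of maximal fitness, random tie-breaking
    best = max(fitness(c) for c in offspring)
    return random.choice([c for c in offspring if fitness(c) == best])
```

With `fitness = sum`, this selects an offspring with the most one-bits, mirroring the intermediate selection on OneMax; the subsequent crossover-with-parent stage of the full algorithm is not shown here.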