
with a higher-than-usual mutation rate together with a biased crossover with the parent,
which can repair the unwanted effects of the aggressive mutation operations.
We defer the detailed discussion of this algorithm to Section 4 and note here only
that several results indicate that this basic idea was successful. In the early works on
the OneMax benchmark, the (1 + (λ, λ)) GA was shown to have a better runtime than
the (1 + 1) EA for a decent range of parameters. With the optimal parameter setting,
this runtime becomes Θ(n (log(n) log log log(n)/log log(n))^{1/2}). While this is not a radical improvement
over the Θ(n log n) runtime of many classic EAs, it is still noteworthy in particular
when recalling that no black-box algorithm can optimize this benchmark faster than in
time Θ(n/log n) [DJW06]. With a suitable fitness-dependent parameter setting or a self-adjusting
parameter choice following a 1/5-rule, the runtime of the (1 + (λ, λ)) GA can
further be lowered to O(n) [DD18]. Similar results have been obtained for random satisfia-
bility instances in a planted solution model [BD17]. Together with a heavy-tailed choice of
the parameters, the (1 + (λ, λ)) GA can also obtain a linear runtime [ABD20a]. Stronger
runtime improvements over many classic algorithms have been obtained on the Jump
benchmark, see [ABD21] and the references therein. Since here the right choice of the pa-
rameters is less understood and since a multi-objective analogue of the Jump-benchmark
has been proposed [DZ21] only very recently, in this first work on multi-objective versions
of the (1 + (λ, λ)) GA we shall concentrate on the more established OneMinMax
problem, which is a multi-objective analogue of the classic OneMax benchmark.
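To make the above more concrete, the following is a minimal sketch of one way to implement the (1 + (λ, λ)) GA on OneMax. It uses the standard parameter coupling from the literature (mutation rate p = λ/n, crossover bias c = 1/λ); all function and variable names are ours, and tie-breaking details are illustrative rather than taken from this paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in x."""
    return sum(x)

def one_plus_lambda_lambda_ga(n, lam, max_iters=10_000, rng=None):
    """Sketch of the (1 + (lambda, lambda)) GA on OneMax.

    Standard coupling: mutation rate p = lam/n, crossover bias c = 1/lam.
    """
    rng = rng or random.Random()
    p, c = lam / n, 1.0 / lam
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        if onemax(x) == n:
            break
        # Mutation phase: sample the strength ell ~ Bin(n, p) once, then
        # create lam offspring, each flipping exactly ell random positions.
        ell = sum(rng.random() < p for _ in range(n))
        best_mut = None
        for _ in range(lam):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            if best_mut is None or onemax(y) > onemax(best_mut):
                best_mut = y
        # Crossover phase: lam biased crossovers between the parent x and
        # the mutation winner; each bit is taken from the winner with
        # probability c, which can repair harmful flips of the mutation.
        best_cross = x[:]
        for _ in range(lam):
            z = [b if rng.random() < c else a
                 for a, b in zip(x, best_mut)]
            if onemax(z) >= onemax(best_cross):
                best_cross = z
        # Elitist selection against the parent.
        if onemax(best_cross) >= onemax(x):
            x = best_cross
    return x
```

On small instances such as n = 20 with λ = 4, this sketch reliably finds the all-ones optimum within the iteration budget.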
Our results: We develop a multi-objective version of the (1 + (λ, λ)) GA by inter-
preting the main loop of the (1 + (λ, λ)) GA as a complex mutation operator and then
equipping the classic global simple evolutionary multi-objective optimizer (GSEMO) with
this mutation operator. For this algorithm, which we call (1 + (λ, λ)) GSEMO, we con-
duct a mathematical runtime analysis on the OneMinMax benchmark. We show that a
reasonable range of static parameter settings gives a better runtime than the O(n² log n)
guarantee known for the classic GSEMO. With an optimal parameter choice we obtain a
runtime of O(n²√log n). With a suitable state-dependent parameter choice comparable
to the one of [DDE15], the runtime drops further to O(n²). Since such a state-dependent
parameter choice requires a deep understanding of the algorithm and the problem, we
then design a self-adjusting parameter setting inspired by the classic one-fifth success
rule. Some adjustments are necessary to work in this multi-objective setting, but then we
obtain a simple dynamic parameter setting that gives the same O(n²) runtime as with
the complicated state-dependent parameter choice. To the best of our knowledge, this
is the first time that such a dynamic parameter choice is developed for a MOEA for a
discrete search space. From a broader perspective, this work shows that the main idea of
the (1 + (λ, λ)) GA, so far only used with (1 + 1)-type algorithms, can also be used in
more complex algorithmic frameworks.
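The one-fifth success rule mentioned above can be sketched as follows. This is the standard multiplicative update scheme from the literature on the single-objective (1 + (λ, λ)) GA; the update factor F and the cap of λ to [1, n] are common choices there, not specifics of this paper, and the function name is ours.

```python
def update_lambda(lam, success, n, F=1.5):
    """One-fifth success rule for the parameter lambda (sketch).

    After an improving iteration lambda shrinks by the factor F;
    otherwise it grows by F^(1/4). Hence lambda is stationary when
    roughly one in five iterations is a success. lambda is kept
    in the range [1, n].
    """
    if success:
        lam = lam / F
    else:
        lam = lam * F ** 0.25
    return min(max(lam, 1.0), float(n))
```

Four consecutive failures thus undo one success, which is what ties the equilibrium to a success rate of about 1/5.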
This work is organized as follows. In the following section, we describe the previous
works most relevant to ours. In Section 3, we briefly recall the basic notions of MOEAs
and state the OneMinMax problem. We develop the (1 + (λ, λ)) GSEMO in Section 4
and conduct our mathematical runtime analysis for static parameters in Section 5. In
Section 6, we propose state-dependent parameters and prove the O(n²) runtime guarantee
for these. We develop and analyze the self-adjusting parameter choice inspired by the one-
fifth rule in Section 7. A short experimental evaluation in Section 8 shows that already
the (1 + (λ, λ)) GSEMO with static parameters easily outperforms the classic GSEMO