Accelerating the Evolutionary Algorithms by Gaussian Process Regression with ε-greedy acquisition function
Rui Zhong1, Enzhi Zhang2, and Masaharu Munetomo3
1 Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0814, Japan, rui.zhong.u5@elms.hokudai.ac.jp
2 Graduate School of Information Science and Technology, Hokkaido University, Sapporo, 060-0814, Japan, enzhi.zhang.n6@elms.hokudai.ac.jp
3 Information Initiative Center, Hokkaido University, Sapporo, 060-0811, Japan, munetomo@iic.hokudai.ac.jp
Abstract. In this paper, we propose a novel method to estimate the elite individual to accelerate the convergence of optimization. Inspired by the Bayesian Optimization Algorithm (BOA), Gaussian Process Regression (GPR) is applied to approximate the fitness landscape of the original problem based on every generation of the optimization, and a simple but efficient ε-greedy acquisition function is employed to find a promising solution on the surrogate model. The Proximity Optimal Principle (POP) states that well-performed solutions have a similar structure, and there is a high probability of better solutions existing around the elite individual. Based on this hypothesis, in each generation of the optimization, we replace the worst individual in the Evolutionary Algorithm (EA) population with the estimated elite individual, which then participates in the evolution process. To illustrate the scalability of our proposal, we combine it with the Genetic Algorithm (GA), Differential Evolution (DE), and CMA-ES. Experimental results on the CEC2013 benchmark functions show that our proposal is promising for estimating the elite individual and accelerating the convergence of optimization.
Keywords: Evolutionary Algorithms (EAs), Proximity Optimal Principle (POP), Gaussian Process Regression (GPR), ε-greedy acquisition function
1 Introduction
Evolutionary Algorithms (EAs) generate offspring by simulating natural behaviors, such as selection, crossover, and mutation in the Genetic Algorithm (GA)[22], or the movement of organisms in a bird flock or fish school in Particle Swarm Optimization (PSO)[18]. Estimation of Distribution Algorithms (EDAs) generate offspring by building a posterior probabilistic distribution model
and sampling from it according to certain strategies[27]. Both have achieved great success on complex optimization problems, such as combinatorial problems[9], large-scale problems[24], constrained problems[23], and multi-objective problems[17]. However, conventional EAs are weak at learning information about the fitness landscape and transforming that information into knowledge[29], which means that solutions generated by crossover and mutation operators may be close to their parents but far away from other promising solutions[27]. For EDAs, simple models
are easy to implement but insufficient for complicated problems[19]. A large
population is required to build a good multivariate model, and the learning
process (including network structure learning and parameter learning) can be
very time-consuming[6,28]. Thus, an efficient EA should take advantage of both
global distribution information and location information to produce promising
solutions.
In addition, many studies[1,8] reveal the importance of finding trust regions or elites in the fitness landscape, which can be applied to guide the direction of evolution. Murata et al.[16] first proposed that a mathematical method could be used to calculate the global optimum using the information of two subsequent generations. Yu et al.[26] proposed that, in the Differential Evolution algorithm, the differential vector from a parent individual to its offspring (or the vector from an individual with poor fitness to an individual with higher fitness) is defined as the moving vector, and the convergence point is estimated according to these moving vectors (a sketch of this construction follows below). A famous hypothesis, the Proximity Optimal Principle (POP)[21], states that well-performed solutions have a similar structure, and there is a high probability of better solutions existing around the elite individual.
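To make the moving-vector idea concrete, the following is a minimal Python sketch of a generic least-squares construction (an illustration of the idea, not necessarily Yu et al.'s exact estimator): each moving vector defines a line through an individual, and the convergence point is estimated as the point minimizing the total squared distance to all such lines.

import numpy as np

def convergence_point(starts, moves):
    """Least-squares point closest to all lines x_i + t * d_i (illustrative).

    starts: (n, d) array of individuals; moves: (n, d) array of their moving
    vectors. Assumes the moving vectors are not all parallel.
    """
    d = starts.shape[1]
    A = np.zeros((d, d))
    b = np.zeros(d)
    for x, m in zip(starts, moves):
        u = m / np.linalg.norm(m)        # unit direction of the moving vector
        P = np.eye(d) - np.outer(u, u)   # projector onto the line's orthogonal complement
        A += P                           # normal equations: (sum_i P_i) p = sum_i P_i x_i
        b += P @ x
    return np.linalg.solve(A, b)         # estimated convergence point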
In this paper, we propose a novel method to estimate the elite individual in evolution, inspired by the Bayesian Optimization Algorithm[7]. In each generation of the optimization, we build a posterior probabilistic model with Gaussian Process Regression (GPR) and estimate the elite individual by maximizing the ε-greedy acquisition function[5]. To illustrate the scalability of our proposal, we combine it with Differential Evolution (DE)[4], the Genetic Algorithm (GA)[25], and CMA-ES[2]. Numerical experiments show that the participation of the elite individual can accelerate the convergence of optimization.
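To make the elite-estimation step concrete, below is a minimal Python sketch of one hybrid generation, assuming an ε-greedy acquisition of the common form: with probability ε a random candidate is returned (exploration), otherwise the candidate with the best GPR posterior mean is returned (exploitation). The function names, candidate-sampling scheme, and hyperparameters are illustrative assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def estimate_elite(population, fitness, low, high, epsilon=0.1,
                   n_candidates=1000, rng=None):
    """Estimate an elite individual from the current generation (sketch).

    population: (n, d) array of individuals; fitness: (n,) array of their
    fitness values (maximization); [low, high]: box bounds of the search space.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = population.shape[1]
    candidates = rng.uniform(low, high, size=(n_candidates, d))
    if rng.random() < epsilon:
        # exploration: return a uniformly random candidate
        return candidates[rng.integers(n_candidates)]
    # exploitation: fit a GPR surrogate on the current generation and
    # return the candidate with the best posterior mean
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gpr.fit(population, fitness)
    return candidates[np.argmax(gpr.predict(candidates))]

# One hybrid step per generation: replace the worst individual with the elite.
# elite = estimate_elite(population, fitness, low=-100.0, high=100.0)
# population[np.argmin(fitness)] = elite   # worst individual under maximization

For minimization problems (e.g., the CEC2013 suite), the argmax/argmin roles would simply be swapped.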
The remainder of this paper is organized as follows: Section 2 covers related
works, including Bayesian Optimization Algorithm, acquisition functions, and
the difference between Evolutionary Algorithms and Estimation of Distribution
Algorithms. Section 3 introduces our proposal, hybridEAs. Section 4 shows the
experiment results and analysis. Finally, Section 5 concludes this paper.
2 Related works
2.1 Bayesian Optimization Algorithm (BOA)
BOA is a typical surrogate-assisted optimization method. In practice, it has proved to be a very effective approach for expensive single-objective optimization problems with a limited budget of fitness evaluations. Without loss of generality, the
optimization problem can be formulated as
\max_{x \in \mathcal{X}} f(x)    (1)

where $\mathcal{X} \subseteq \mathbb{R}^d$ is the feasible space and $f: \mathbb{R}^d \to \mathbb{R}$. Algorithm 1 outlines the procedure of the standard BOA.
Algorithm 1: Bayesian Optimization Algorithm
Input: number of initial samples N; maximum iterations M; acquisition function a
Output: best solution S

1  Function BOA(N, M):
2      X ← Sampling(N)
3      for i = 0 to N do
4          F_i ← f(X_i)
5      end
6      S ← BestSample(X, F)
7      for t = 0 to M do
8          m ← TrainGP(X, F)               # Train a Gaussian Process model
9          x′ ← argmax_{x ∈ 𝒳} a(X, F, m)  # Maximize the acquisition function to determine the next sample point
10         F′ ← f(x′)
11         S ← max(F′, S)
12         X ← X ∪ {x′}
13         F ← F ∪ {F′}
14     end
15     return S
BOA determines the coordinates of the next sample x′ by maximizing the acquisition function a. To date, many studies on the design of acquisition functions have been published, and different acquisition functions perform differently in balancing exploration and exploitation.
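For reference, a minimal Python sketch of Algorithm 1 is given below, using scikit-learn's GaussianProcessRegressor as the surrogate model; the inner argmax is approximated by scoring a batch of random candidates, and the acquisition function is passed in as a callable. All names and parameter values are illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def boa(f, low, high, dim, acquisition, n_init=10, n_iter=50, rng=None):
    """Minimal BOA loop following Algorithm 1 (maximization, illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.uniform(low, high, size=(n_init, dim))   # initial samples
    F = np.array([f(x) for x in X])                  # and their fitness values
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, F)                                 # train the GP surrogate
        # approximate the argmax of the acquisition over random candidates
        cand = rng.uniform(low, high, size=(1000, dim))
        x_new = cand[np.argmax(acquisition(gp, cand, F.max()))]
        X = np.vstack([X, x_new])                    # X <- X ∪ {x'}
        F = np.append(F, f(x_new))                   # F <- F ∪ {f(x')}
    best = np.argmax(F)
    return X[best], F[best]

A trivial acquisition under this interface would be the posterior mean, lambda gp, cand, best: gp.predict(cand); the acquisition functions discussed next trade off the posterior mean against its uncertainty.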
2.2 Acquisition functions
Balancing exploration and exploitation is the issue of most concern for researchers in global optimization. As described in Section 2.1, different acquisition functions apply different strategies to balance exploration and exploitation. In this section, we introduce three popular acquisition functions. Note that all explanations assume a maximization problem.
Probability of Improvement (PI) PI is one of the earliest proposed acquisition functions[12]. The basic idea is to select the sample that maximizes the probability of improving on the current best sample under the posterior probabilistic model.
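In its common textbook form for a maximization problem (stated here for reference, with $\mu(x)$ and $\sigma(x)$ the GP posterior mean and standard deviation at $x$, $f(x^+)$ the current best observed value, and $\Phi$ the standard normal CDF):

\mathrm{PI}(x) = P\big(f(x) \ge f(x^+)\big) = \Phi\!\left(\frac{\mu(x) - f(x^+)}{\sigma(x)}\right)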