
ters of the objective function many times over. The less
prior knowledge about the system is available, the longer this process may take.
The underlying problem is essentially one of compression: the objective function needs to reduce a complex distribution function to a single number characterizing said distribution. For an unknown distribution function, this is impossible without information loss. In fact, even if the distribution were known, e.g. a normal distribution, one would still need both mean and variance to describe it unambiguously. In the
case of an unknown one-dimensional distribution func-
tion, we can use multiple statistical descriptions to cap-
ture essential features of the distribution such as the cen-
tral tendency (weighted arithmetic or truncated mean,
the median, mode, percentiles, etc.) and the statistical
dispersion of the distribution (full width at half max-
imum, median absolute deviation, standard deviation,
maximum deviation, etc.). These measures weigh dif-
ferent features in the distribution differently. One may
also include higher-order features such as the skewness,
which occurs for instance as a sign of beam loading in en-
ergy spectra of laser-plasma accelerators [8], or coupling
terms between the different parameters. Lastly, the amplitude or integral of the distribution function is often a parameter of interest [16].
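To make the preceding list concrete, the following sketch computes several of these scalar descriptors (integral, weighted mean, standard deviation, skewness, median, mode, and full width at half maximum) from a binned one-dimensional spectrum; the function name and interface are our own illustration, not part of any published analysis code.

```python
import numpy as np

def spectrum_descriptors(energy, counts):
    """Reduce a binned 1D spectrum (bin centers, counts) to scalar descriptors."""
    energy = np.asarray(energy, dtype=float)
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()                              # amplitude/integral proxy
    w = counts / total                                # normalized weights
    mean = np.sum(w * energy)                         # central tendency
    std = np.sqrt(np.sum(w * (energy - mean) ** 2))   # statistical dispersion
    skew = np.sum(w * ((energy - mean) / std) ** 3)   # higher-order feature
    # median: first bin where the cumulative weight reaches 0.5
    median = energy[np.searchsorted(np.cumsum(w), 0.5)]
    # mode: bin with the maximum count
    mode = energy[np.argmax(counts)]
    # FWHM: extent of the region at or above half the peak value
    above = energy[counts >= counts.max() / 2]
    fwhm = above.max() - above.min()
    return {"total": total, "mean": mean, "std": std, "skew": skew,
            "median": median, "mode": mode, "fwhm": fwhm}
```

Each scalarized objective used later corresponds to one (or a combination) of such descriptors.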
In the following, we will discuss optimizations of elec-
tron energy spectra according to different objective def-
initions and then present a more general multi-objective
optimization.
The paper is structured as follows: First, we are going
to discuss details of the simulated laser-plasma accelera-
tor used for our numerical experiments (Section II) and
introduce Bayesian optimization (Section III). Then we
present results from optimization runs using different def-
initions of scalarized objectives that aim for beams with
high charge and low energy spread at a certain target en-
ergy (Section IV). We then compare these results with an
optimization using effective hypervolume optimization of
all objectives (Section V). In Section VI we discuss some
of the physics that the optimizer ’discovers’ during opti-
mization and in the last section, we summarize our results
and outline perspectives for future research (Section VII).
II. LASER-PLASMA ACCELERATOR
As a test system for optimization, we use an example
from the realm of plasma-based acceleration, i.e. a laser
wakefield accelerator with electron injection in a sharp
density downramp [8,17]. The basic scenario here is that
electrons get trapped in a laser-driven plasma wave due
to a local reduction in the plasma density, which is of-
ten realized experimentally as a transition from one side
to the other of a hydrodynamic shock, hence the often-used name "shock injection". The number of electrons
injected at this density transition strongly depends on
the laser parameters at the moment of injection, but also
FIG. 1. Illustration of the four variable input parameters from Table I, namely the upramp length l_up, the downramp length l_down, the plateau density n_e, and the focus position z_0.
on the plasma density itself. Both parameters also affect
the final energy spectrum the electrons exhibit at the end
of the acceleration process. Here we will use simulations
to investigate this system, the primary reason being that
they are perfectly reproducible and do not require addi-
tional handling of jitter, drifts, and noise. However, the
methods outlined in this paper are equally relevant to
experiments. The input space consists of four variable
parameters, namely the plateau plasma density, the po-
sition of laser focus, as well as the lengths of the up- and
downramps of the plasma density close to the density
transition.
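A minimal encoding of this four-dimensional input space, e.g. for handing bounds to an optimizer, might look as follows; the numerical bounds are illustrative placeholders and do not reproduce the values of Table I.

```python
# Hypothetical search-space definition for the four input parameters.
# All bounds are illustrative placeholders, not the values of Table I.
search_space = {
    "n_e":    (0.5e18, 5.0e18),  # plateau plasma density [cm^-3]
    "z_0":    (-1.0, 1.0),       # laser focus position [mm]
    "l_up":   (0.1, 2.0),        # density upramp length [mm]
    "l_down": (0.1, 2.0),        # density downramp length [mm]
}
```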
While the shock injection scenario is sufficiently com-
plex to require particle-in-cell codes, we use the code
FBPIC by Lehe et al. [18] in conjunction with various
optimizations to achieve an hour-scale run-time. On the
hardware side, the code is optimized to run on NVIDIA
GPUs (here we used Tesla V100 or RTX3090), while the
physical model includes optimizations such as the usage
of a cylindrical geometry with Fourier decomposition in
the angular direction and boosted-frame moving windows
[19]. Additionally, we can take advantage of the very lo-
calized injection to locally increase the macro-particle density in the injection area [8]. Similarly, the linear
wakefields forming in regions of lower laser intensity re-
sult in a nearly laminar flow of particles, meaning that
we can decrease the macro-particle density far away from
the laser axis [20].
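A schematic of this spatially varying macro-particle density could be written as follows; the function, thresholds, and particle counts are hypothetical illustrations of the strategy, not FBPIC's actual interface.

```python
def macroparticles_per_cell(r, z, z_inj=0.0, dz=50e-6, r_core=30e-6,
                            base=2, boosted=8, reduced=1):
    """Illustrative particles-per-cell map (all parameters hypothetical):
    boost the macro-particle density near the localized injection region,
    and reduce it far from the laser axis, where the nearly laminar flow
    in the linear wakefields tolerates a coarser sampling."""
    if abs(z - z_inj) < dz:   # localized injection region: refine
        return boosted
    if r > r_core:            # far off-axis: coarsen
        return reduced
    return base               # default resolution elsewhere
```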
One particular challenge that arises in simulations over
a large range of parameters is that different input param-
eters may result in different computational requirements.
For instance, the transverse box size needs to be several times larger than the beam waist to ensure that the energy of a focusing beam is not lost. Hence, a laser that is initialized out of focus requires a larger box size than a beam initialized in focus. We address this by scaling the transverse box size l_r as a function of the laser waist w(z)
at the beginning of the simulation. Similarly, the size of
the wakefield depends on the plasma density, and accord-
ingly, we scale the longitudinal size lzof the box with the
estimated wakefield size. By using these adapted simu-