
Figure 1: A schematic overview of the proposed inference pipeline for GISAXS data (A). In the first step, the conditional probability p(X, z|ζ) over GISAXS data X and latent variables z given the in-plane GISAXS signal ζ is approximated by conditional VAEs (B). The probabilistic model allows us to compute a robust representation c that can be obtained even when only ζ is given as input. Afterwards, we approximate the posterior distribution p(y|c) over object parameters with normalizing flows (D), yielding fast inference that accelerates feedback during experiments (C).
et al. [5] and Liu et al. [6] use convolutional neural networks for the one-step classification of experimental images. At the same time, Van Herck et al. [7] infer the rotation distribution of nanoparticle arrangements. Similar to our use case, Mironov et al. [8] use convolutional neural networks with uncertainty quantification to estimate film parameters from neutron reflectivity curves.
Our contribution
In this paper, we develop an inference framework for fast and robust reconstruction of GISAXS data to accelerate GISAXS data analysis. As the signal-to-noise ratio (SNR) of experimental images may suffer from distortions caused by the grazing-incidence geometry [9], we mainly focus on using the in-plane scattering signal¹ as input. Despite the recent success of discriminative neural networks, such models do not account for the inherent ambiguity of reconstruction and, as a rule, do not provide the researcher with uncertainty quantification. Instead of learning a function from images to parameters, we take a Bayesian approach and estimate the posterior distribution of object parameters given the GISAXS data. Our framework has a two-fold structure. In the first step, we learn a robust probabilistic representation of the GISAXS generative process with variational auto-encoders [10]. Second, we model the posterior distribution via likelihood-free inference [11] with normalizing flows [12].
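To illustrate the normalizing-flow machinery, the following is a minimal NumPy sketch of a single affine coupling layer (RealNVP-style), not the paper's actual implementation. The functions `s` and `t` stand in for learned networks; in the conditional, likelihood-free setting described here they would additionally take the representation c as input.

```python
import numpy as np

def coupling_forward(z, s, t):
    """One affine coupling layer: keep the first half of z unchanged and
    transform the second half with a scale s(.) and shift t(.) computed
    from the first half. Returns the output and log|det Jacobian|."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    log_s, shift = s(z1), t(z1)
    y2 = z2 * np.exp(log_s) + shift
    log_det = np.sum(log_s, axis=-1)  # Jacobian is triangular, det = prod(exp(log_s))
    return np.concatenate([z1, y2], axis=-1), log_det

def coupling_inverse(y, s, t):
    """Exact inverse of coupling_forward, needed to evaluate densities."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    log_s, shift = s(y1), t(y1)
    z2 = (y2 - shift) * np.exp(-log_s)
    return np.concatenate([y1, z2], axis=-1)
```

Stacking several such layers (with permutations between them) yields an expressive, invertible map whose density can be evaluated exactly via the change-of-variables formula, which is what makes flows suitable for posterior estimation.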
2 Methods
Conditional variational auto-encoders
Variational auto-encoders (VAEs) [10] are a framework for modeling the data generation process via approximation of the joint probability p(x, z) over observed variables x and latent variables z. The framework combines a generative model pθ(x|z), an inference model qφ(z|x), and a prior p(z), allowing unconditional data generation from a learned distribution model. Conditional VAEs (CVAEs) [13] extend the framework to model a conditional distribution p(x, z|y). To learn the model parameters θ and φ, one maximizes the conditional log-likelihood log pθ(x|y) via the evidence lower bound [10, 13]. Subsequently, one can sample from the conditional distribution pθ(x|y, z), where the random noise z accounts for the variance in the reconstruction of x from y. We model each distribution as a Normal distribution with learnable mean and variance.
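Since all distributions are diagonal Gaussians, the evidence lower bound has a closed-form KL term plus a Monte Carlo reconstruction term. The following is a minimal NumPy sketch of that objective under these assumptions (the `decoder` callable and the one-sample estimate are illustrative simplifications, not the paper's implementation):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q):
    """Closed-form KL( N(mu_q, var_q) || N(0, I) ) for diagonal Gaussians,
    summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar_q) + mu_q**2 - 1.0 - logvar_q)

def gaussian_log_likelihood(x, mu, logvar):
    """log N(x; mu, var) with diagonal covariance, summed over dimensions."""
    return -0.5 * np.sum(logvar + np.log(2 * np.pi) + (x - mu) ** 2 / np.exp(logvar))

def elbo(x, mu_z, logvar_z, decoder, rng):
    """One-sample Monte Carlo estimate of the evidence lower bound.
    `decoder` maps a latent sample to (mu_x, logvar_x) of the likelihood."""
    eps = rng.standard_normal(mu_z.shape)
    z = mu_z + np.exp(0.5 * logvar_z) * eps  # reparameterization trick
    mu_x, logvar_x = decoder(z)
    return gaussian_log_likelihood(x, mu_x, logvar_x) - gaussian_kl(mu_z, logvar_z)
```

In the conditional (CVAE) case, the encoder and decoder would additionally receive the conditioning signal as input; the structure of the objective is unchanged.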
¹We define the in-plane scattering signal (profile) as the average of a central region of an image (see Fig. 1A) over the lateral dimension of the detector, with the parasitic scattering signal from the beamstop subtracted.
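The footnote's definition can be sketched in a few lines of NumPy. The band boundaries, axis convention, and the precomputed `parasitic` background are assumptions for illustration, not details from the paper:

```python
import numpy as np

def in_plane_profile(image, row_lo, row_hi, parasitic):
    """Compute an in-plane scattering profile from a 2D detector image:
    average a central band of rows over the lateral (row) dimension and
    subtract a precomputed parasitic scattering background (beamstop)."""
    band = image[row_lo:row_hi, :]   # central region of the detector
    profile = band.mean(axis=0)      # average over the lateral dimension
    return profile - parasitic
```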