
However, the need for a training dataset makes such techniques impractical in many applications of
scientific discovery. A chicken-and-egg problem arises in the case of fragile or live specimens: without
a reference object dataset, we cannot create a faster imaging method, but without the faster imaging
method the training object dataset cannot be obtained. In this work, we outline a reconstruction
method that only requires a representative dataset of sparse or partial measurements on each object. To
circumvent the need for complete training dataset pairs, we look to jointly reconstruct a set of similar
objects, each with a low number of measurements. By pooling information from measurements
across the set and incorporating the known forward physics of imaging, we aim to jointly infer the prior distribution and the per-object posterior distributions, allowing for improved reconstructions with fewer measurements per object.
More precisely, computational imaging aims to reconstruct some object $O$ from a sequence of $n$ noisy measurements $M = [M_1, M_2, \ldots, M_n]$. We aim to lower the total number of measurements $n$ to minimize data acquisition time. We assume that we have a set of $m$ objects $\{O_1, O_2, \ldots, O_m\}$, sampled from some distribution $P(O)$, and we aim to reconstruct all objects in the set. For each of the $m$ objects, we are allowed $n$ measurements. Each sequence of measurements for an object $j$, $M_j = [M_{j1}, M_{j2}, \ldots, M_{jn}]$, is obtained with chosen hardware parameters $p_j = [p_{j1}, p_{j2}, \ldots, p_{jn}]$ (e.g., rotation angles in the case of computed tomography or the LED illumination patterns in the case of LED array microscopy). We assume that the forward model physics $P(M \mid O; p) = P(M \mid O)$ is known. For every object $O$, we aim to find the posterior distribution
$$P(O \mid M) = \frac{P(M \mid O)\,P(O)}{P(M)}.$$
The following problems arise in finding the posterior: (1) construction of the prior $P(O)$ with no directly observed $O$, only indirect measurements $M$ on each object of the set, and (2) calculating $P(O \mid M)$ in a tractable manner. To efficiently solve this problem, we create a novel technique through
a reformulation of variational autoencoders. The probabilistic formulation considered in this work
permits uncertainty quantification, in contrast to most reconstruction algorithms that only yield a
point estimate.
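For intuition, recall the standard evidence lower bound that underlies variational autoencoders (stated here for orientation, not as the exact objective used in this work): introducing an approximate posterior $q(O_j \mid M_j)$ for each object gives
$$\log P(M_j) \;\geq\; \mathbb{E}_{q(O_j \mid M_j)}\!\left[\log P(M_j \mid O_j; p_j)\right] \;-\; D_{\mathrm{KL}}\!\left(q(O_j \mid M_j)\,\|\,P(O_j)\right),$$
where the first term can be evaluated with the known forward physics and the prior $P(O)$ is shared across, and learned jointly from, all $m$ objects.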
2 Related Work
Deep learning has been widely applied to reduce the data acquisition burden of computational imaging
systems. In one line of research, training pairs of sparse measurements and corresponding high-quality reconstructions are used to train a deep convolutional neural network, implicitly embedding prior information [16–40]. Subsequent sparse measurements can then be reconstructed with a forward pass
of the trained neural network, with the benefit of avoiding computationally costly iterative algorithms.
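As a minimal sketch of this supervised strategy (the architecture, tensor shapes, and training loop below are illustrative assumptions rather than any specific cited method), a network is fit to map sparse measurements directly to reference reconstructions, so that inference reduces to a single forward pass:

    import torch
    import torch.nn as nn

    # Illustrative reconstruction CNN: sparse measurements in, object estimate out.
    model = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(sparse_measurements, reference_reconstructions):
        # Both tensors: (batch, 1, H, W); the references come from a pre-existing dataset.
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(sparse_measurements), reference_reconstructions)
        loss.backward()
        optimizer.step()
        return loss.item()

    # At test time, reconstruction is a single forward pass: estimate = model(new_measurements).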
Deep neural network approaches for object reconstruction offer the advantages of incorporating prior knowledge from data and of fast inference, while more traditional iterative (model-based) methods have the advantage of utilizing the known forward physics model (i.e., how measurements are
generated, given the object). The advantages of these two approaches are combined by unrolling
an iterative method, with each iteration forming a layer of a neural network [41–53]. This unrolled
deep neural network can be trained to optimize iterative algorithm hyperparameters for a given
training dataset, effectively optimizing an optimizer. The unrolled iterative methods have been shown
to require less training data and time than a convolutional neural network approach, due to the
incorporation of the forward model.
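As a rough sketch of this idea (the linear forward operator A and the per-iteration learned correction below are illustrative assumptions, not the formulation of any particular cited work), each network layer applies one physics-informed gradient step on the data-fidelity term followed by a small learned update:

    import torch
    import torch.nn as nn

    class UnrolledNet(nn.Module):
        # Each "layer" is one gradient step on ||A x - m||^2 plus a learned correction.
        def __init__(self, A, num_iters=8):
            super().__init__()
            self.register_buffer("A", A)                       # forward operator, shape (n_meas, n_pixels)
            self.step_sizes = nn.Parameter(torch.full((num_iters,), 0.1))  # learned per-iteration step sizes
            self.correctors = nn.ModuleList([
                nn.Sequential(nn.Linear(A.shape[1], A.shape[1]), nn.ReLU(),
                              nn.Linear(A.shape[1], A.shape[1]))
                for _ in range(num_iters)
            ])

        def forward(self, m):                                  # m: (batch, n_meas)
            x = torch.zeros(m.shape[0], self.A.shape[1], device=m.device)  # initial object estimate
            for k, corrector in enumerate(self.correctors):
                grad = (x @ self.A.T - m) @ self.A             # gradient of the data-fidelity term
                x = x - self.step_sizes[k] * grad              # physics-informed update
                x = x - corrector(x)                           # learned regularization step
            return x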
Building on this literature, a second body of approaches includes the measurement process in neural network training in order to discern the optimal measurement parameters (e.g., the LED illumination patterns in LED array microscopy) for sparse sampling and subsequent reconstruction.
In these works, high-quality reconstructions are needed, and corresponding noisy measurements are
emulated with the known forward physics. The measurement process is included as the encoder part
of an autoencoder neural network, and co-optimized with the reconstruction algorithm, which forms
the decoder. Many works use a convolutional neural network as the decoder [54–75] and others
use an unrolled iterative solver [76–78]. Co-optimizing the measurement parameters reduces the number of measurements required for computational imaging beyond what is possible when the parameters are kept fixed during training. However, this approach still requires a reference training dataset.
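A minimal sketch of this co-optimization, assuming an LED-array-style encoder in which learnable illumination weights combine a stack of per-LED intensity images (the weight parameterization, noise model, and decoder below are illustrative assumptions):

    import torch
    import torch.nn as nn

    class PhysicsEncoder(nn.Module):
        # Encoder = simulated measurement process with learnable illumination patterns.
        def __init__(self, num_leds, num_measurements):
            super().__init__()
            self.led_weights = nn.Parameter(torch.rand(num_measurements, num_leds))

        def forward(self, per_led_images):                     # (batch, num_leds, H, W)
            patterns = torch.softmax(self.led_weights, dim=1)  # nonnegative, normalized patterns
            measurements = torch.einsum('mk,bkhw->bmhw', patterns, per_led_images)
            return measurements + 0.01 * torch.randn_like(measurements)  # emulated measurement noise

    # Decoder = reconstruction network (a CNN here; some works use an unrolled solver instead).
    num_measurements = 4
    decoder = nn.Sequential(
        nn.Conv2d(num_measurements, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    encoder = PhysicsEncoder(num_leds=25, num_measurements=num_measurements)

    # Encoder (measurement parameters) and decoder (reconstruction) are trained jointly
    # against reference reconstructions, which this class of methods still requires:
    # measurements = encoder(per_led_images); estimate = decoder(measurements).
    optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)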
In this work, we look to remove the need for ground-truth or reference reconstructions. We aim to
create a reconstruction method that only requires a representative dataset of sparse measurements
on each object. This task has been previously undertaken, usually with generative adversarial
networks [79–87]. The intuition is that by using different experimental measurement parameters