Nevertheless, disease progression and treatment decisions are strongly dependent on maximum
tumor diameter and tumor volume, as well as the corresponding morphological changes during a
treatment period. The imaging method of choice here is magnetic resonance imaging (MRI).
However, MRI does not provide any semantic information about brain structures or the brain tumor per se. Such information has to be generated manually, semi-automatically, or automatically in a post-processing step, commonly referred to as segmentation. Performed manually, however, segmentation is very time-consuming and operator-dependent, especially in a three-dimensional image volume [15], which requires slice-by-slice contouring. Hence, an automatic (algorithmic) segmentation is desirable, especially when large quantities of data have to be processed.
Although it is still considered an unsolved problem, there has been steady progress from year to year, and data-driven approaches, such as deep neural networks, currently provide the best (fully automatic) results. However, segmentation with a data-driven approach, such as deep learning [16], comes with several burdens. Firstly, the algorithm generally needs massive amounts of annotated training data. Moreover, for longitudinal (intra-patient) disease monitoring, several segmentations have to be performed, and these scans have to be registered to each other, which adds uncertainty to the overall procedure, especially when deformable soft tissue comes into play [17].
In this regard, we tackle these problems with a personalized neural network that needs only the patient's own data: no annotations and no extra registration step. To the best of our knowledge, this is the first study in the medical domain to train a deep neural network with this little training data. The method addresses the issues of gathering big datasets in medicine and of producing a privacy-safe network. The approach is considered unsupervised learning, as no data annotation is necessary. We evaluate the model with an ROC analysis as well as modified RANO criteria on two different datasets of longitudinal MRI images of patients with glioblastoma.
2 Methods
2.1 Model architecture and training
The neural network architecture used in this study is based on Wasserstein GANs [18], a modified version of Generative Adversarial Networks (GANs) [6]. GANs are a form of deep neural network in which two sub-models are trained adversarially in a zero-sum game: a generator is trained to create new images, while a discriminator is trained to distinguish between real and synthetic images. In Wasserstein GANs, the discriminator is replaced by a critic function, which leads to more stable training [18].
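The distinction between a discriminator and a critic can be made concrete with a minimal sketch of the Wasserstein objective. This is a generic illustration of the loss terms from [18], not the paper's implementation; the function names and toy scores are ours. The critic outputs an unbounded score rather than a probability, and training widens the score gap between real and synthetic images:

```python
import numpy as np

def critic_loss(scores_real, scores_fake):
    """Critic minimizes -(E[f(real)] - E[f(fake)]), i.e. it maximizes
    the gap between scores on real and synthetic images."""
    return -(np.mean(scores_real) - np.mean(scores_fake))

def generator_loss(scores_fake):
    """Generator tries to raise the critic's score on synthetic images."""
    return -np.mean(scores_fake)

# Toy critic outputs: real images currently score higher than fakes,
# so the critic loss is negative (the critic already separates them).
real = np.array([1.5, 2.0, 1.8])
fake = np.array([-0.5, 0.1, -0.2])
print(critic_loss(real, fake), generator_loss(fake))
```

Because the critic's score is unbounded, its gradients do not saturate the way a sigmoid discriminator's do, which is one intuition for the more stable training reported in [18].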
Our network architecture is similar to the model used by Baumgartner et al. [19]. The aim of the network is to create a map that transforms an image from the first timepoint (t1) into the corresponding image at the second timepoint (t2). This makes the model learn to represent the changes between the images, specifically tumor growth or shrinkage in our case. To this end, augmented versions of the image at t1 are used as input to the generator. The generator tries to create a map that, when added to the input image, yields an image at t2. The critic tries to distinguish these generated synthetic t2 images from the real t2 images, thereby forcing the generator to learn the differences between the two timepoints.
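The additive-map formulation above can be sketched in a few lines. This is a toy illustration only: `toy_generator` is a hypothetical stand-in for the trained network (in the actual model it is a deep convolutional network), and the 2x2 "scan" is fabricated for demonstration. The key point is that the generator outputs a difference map, and the synthetic t2 image is the sum of the t1 input and that map:

```python
import numpy as np

def toy_generator(img_t1):
    """Hypothetical stand-in for the trained generator: pretend the
    'tumor' (intensities > 0.5) grows, adding 0.3 to those voxels."""
    return np.where(img_t1 > 0.5, 0.3, 0.0)

img_t1 = np.array([[0.1, 0.6],
                   [0.7, 0.2]])       # toy 2x2 "scan" at timepoint t1
diff_map = toy_generator(img_t1)      # learned change between t1 and t2
synthetic_t2 = img_t1 + diff_map      # image the critic compares to real t2
```

Because the change is represented as an explicit additive map rather than a full resynthesized image, the map itself localizes where the two timepoints differ, which is what makes the approach usable for monitoring tumor change without annotations or registration.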