TRANSFORMER-BASED CONDITIONAL GENERATIVE
ADVERSARIAL NETWORK FOR MULTIVARIATE TIME
SERIES GENERATION
Abdellah Madane (madane@lipn.univ-paris13.fr)
Mohamed-Djallel Dilmi (dilmi@lipn.univ-paris13.fr)
Florent Forest (forest@lipn.univ-paris13.fr)
Hanane Azzag (azzag@lipn.univ-paris13.fr)
Mustapha Lebbah (mustapha.lebbah@uvsq.fr)
Jérôme Lacaille (jerome.lacaille@safrangroup.com)
ABSTRACT
Conditional generation of time-dependent data is a task of broad interest, whether for data augmentation, scenario simulation, imputation of missing data, or other purposes. Recent work proposed a Transformer-based Time Series Generative Adversarial Network (TTS-GAN) to address the limitations of recurrent neural networks. However, this model assumes a unimodal distribution and attempts to generate samples around the expectation of the real data distribution. As a result, it may generate an arbitrary multivariate time series and can fail when the overall distribution comprises multiple sub-components. One could overcome this limitation by training a separate model for each sub-component. Our work extends TTS-GAN by conditioning its generated output on an encoded context, allowing a single model to fit a mixture distribution with multiple sub-components. Technically, it is a conditional generative adversarial network that models realistic multivariate time series under different types of conditions, such as categorical variables or multivariate time series. We evaluate our model on the UniMiB dataset, which contains acceleration data along the X, Y, and Z axes of human activities collected using smartphones. We use qualitative evaluations and quantitative metrics such as Principal Component Analysis (PCA), and we introduce a modified version of the Fréchet inception distance (FID) to measure the performance of our model and the statistical similarity between the generated and the real data distributions. We show that this Transformer-based CGAN can generate realistic, high-dimensional, and long data sequences under different kinds of conditions.
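The Fréchet distance underlying FID-style metrics compares Gaussian fits (mean and covariance) of real and generated feature sets. As a minimal NumPy sketch of that statistic only, assuming features have already been extracted by some encoder (the paper's specific modification of FID is not reproduced here):

```python
import numpy as np

def sqrtm_psd(m):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_*: (n_samples, n_features) arrays of encoder features.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    s1 = sqrtm_psd(c1)
    # tr((C1 C2)^{1/2}) computed via the symmetric form C1^{1/2} C2 C1^{1/2}
    tr_mean = np.trace(sqrtm_psd(s1 @ c2 @ s1))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1) + np.trace(c2) - 2.0 * tr_mean)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8))
print(frechet_distance(x, x))  # ~0 for identical feature sets
```

Identical feature sets yield a distance near zero, while distributions with shifted means or different covariances yield larger values.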
1 INTRODUCTION
Conditional generative adversarial networks have attracted significant interest recently (Hu et al.,
2021; Liu & Yin, 2021; Liu et al., 2021). The quality of generated samples by such models is im-
proving rapidly. One of their most exciting applications is multivariate time series generation, par-
ticularly when considering contextual knowledge to carry out this generation. Most published works
address this challenge by using recurrent architectures (Lu et al., 2022), which usually struggle with
long time series due to vanishing or exploding gradients. One other way to process sequential data
is via Transformer-based architectures. In the span of five years, Transformers have repeatedly ad-
vanced the state-of-the-art on many sequence modeling tasks (Yang et al., 2019)(Radford et al.,
2019)(Conneau & Lample, 2019). Thus, it was a matter of time before we could see transformer-
based solutions for time series (Yoon et al., 2019)(Wu et al., 2020)(Mohammadi Farsani & Pazouki,
2020), and particularly for multivariate time series generation (Li et al., 2022)(Leznik et al., 2021).
These studies showed promising results. Consequently, whether Transformer-based techniques are
suitable for conditional multivariate time series generation is an interesting problem to investigate.
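As a minimal illustration of how a categorical condition can be injected into a transformer-style generator, the following NumPy sketch adds a learned label embedding to the latent input sequence. All names, shapes, and the additive conditioning scheme here are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, seq_len, d_model = 3, 100, 16

# Hypothetical learned parameter: one embedding vector per condition class.
label_embeddings = rng.normal(size=(n_classes, d_model))

def condition_latent(noise, label):
    """Add the label embedding to every time step of the latent sequence.

    noise: (seq_len, d_model) latent input to the generator
    label: integer class index used as the condition
    """
    return noise + label_embeddings[label]  # broadcast over time steps

z = rng.normal(size=(seq_len, d_model))
conditioned = condition_latent(z, label=1)
print(conditioned.shape)  # (100, 16)
```

The same latent noise produces different generator inputs for different labels, which is what lets a single model cover multiple sub-components of a mixture distribution.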
Our work extends the TTS-GAN (Li et al., 2022) by conditioning its generated output on an encoded context, allowing a single model to fit a mixture distribution with multiple sub-components. Our contributions are summarized as follows: