
SCP-GAN: SELF-CORRECTING DISCRIMINATOR OPTIMIZATION FOR TRAINING
CONSISTENCY PRESERVING METRIC GAN ON SPEECH ENHANCEMENT TASKS
Vasily Zadorozhnyy∗,1, Qiang Ye1, and Kazuhito Koishida2
1Department of Mathematics, University of Kentucky, Lexington, USA
2Applied Sciences Group, Microsoft Corporation, Redmond, USA
1{vasily.zadorozhnyy, qye3}@uky.edu 2kazukoi@microsoft.com
ABSTRACT
In recent years, Generative Adversarial Networks (GANs)
have produced significantly improved results in speech en-
hancement (SE) tasks. They are difficult to train, however.
In this work, we introduce several improvements to the GAN
training schemes, which can be applied to most GAN-based
SE models. We propose using consistency loss functions,
which target the inconsistency in time and time-frequency
domains caused by Fourier and Inverse Fourier Transforms.
We also present self-correcting optimization for training a
GAN discriminator on SE tasks, which helps avoid “harm-
ful” training directions for parts of the discriminator loss
function. We have tested our proposed methods on several
state-of-the-art GAN-based SE models and obtained consis-
tent improvements, including new state-of-the-art results for
the Voice Bank+DEMAND dataset.
Index Terms—Speech Enhancement, Generative Adver-
sarial Network, MetricGAN, Self-Correcting Optimization,
STFT Consistency, Voice Bank+DEMAND
1. INTRODUCTION
Speech Enhancement (SE) is the process of making degraded
speech signals more intelligible and perceptually pleasing.
SE is widely used in various applications, including
mobile communication, speech recognition systems, and
hearing aids, and has been an active research area
for several decades. Traditional SE techniques [1,2]
often use a heuristic or straightforward signal processing al-
gorithm to estimate a gain function, which is then applied to
the noisy input to produce improved speech. Recent devel-
opments in deep learning have inspired many Deep Neural
Network (DNN)-based SE techniques [3,4,5,6,7] that out-
perform conventional signal processing-based methods. One
particular DNN-based architecture, the Generative Adversarial
Network (GAN), has garnered much interest in the SE
community over the past few years [5,6,8,9]. In the applications
of SE, the GAN architecture is primarily employed to generate
enhanced speech.
∗Work performed while Vasily Zadorozhnyy was an intern at Microsoft.
One of the earliest works in which GAN models were applied to the SE domain is the SEGAN [5]
model. It utilizes an adversarial framework to map the noisy
waveform to a corresponding enhanced speech. Later, MetricGAN [6]
introduced a metric-score optimization scheme in which an
evaluation metric is incorporated into the adversarial loss
functions, replacing the traditional binary classifier [5] and
opening a new branch of GAN-based SE research. There
have been several improvements to the MetricGAN model,
e.g., MetricGAN+ [8], iMetricGAN [10], and CMGAN [9].
More recently, with the rise of Transformers [11] and Conformers
[12], models such as DB-AIAT [13], DPT-FSNet [14],
SE-Conformer [15], and CMGAN [9] have shown significant
improvements on SE tasks.
Despite much work, the training of GAN-based models is
prone to problems such as non-convergence, overfitting, and
gradient instabilities. One common issue in GAN discriminator
training is a potentially “harmful” gradient direction [16],
where parts of the model may train opposite to the desired
direction. To overcome this problem, we propose a new
method called Self-Correcting (SC) Discriminator Optimization.
At the same time, DNN-based SE models are subject
to problems caused by signal-processing tools, e.g., the
inconsistency between the Short-Time Fourier Transform
(STFT) and its inverse (iSTFT) [7,17]. Inspired by [18], we
adapt the consistency loss function and introduce it, as part
of a Consistency Preserving (CP) Net, into the GAN framework,
where both the loss and the architecture account for
iSTFT effects. From
our experiments, combining the SC and CP methods improves
GAN-based SE models beyond what either method achieves
alone; we call this combination SCP-GAN.
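To illustrate what avoiding a “harmful” direction can mean, the following sketch combines the gradients of two discriminator loss terms and projects out any component that would, to first order, increase one of the terms, in the spirit of gradient-surgery methods. This is our own hypothetical illustration (the function name and the projection rule are our assumptions), not the exact self-correcting update proposed in this work:

```python
import numpy as np

def combine_without_harm(g_real, g_fake):
    """Combine the gradients of two loss terms so that a descent step
    along the result does not increase either term to first order.
    A step -g increases term i (to first order) when <g, g_i> < 0,
    so any such conflicting component is projected out."""
    g = g_real + g_fake
    for gi in (g_real, g_fake):
        dot = np.dot(g, gi)
        if dot < 0:
            # Remove the component of g that conflicts with term i.
            g = g - (dot / np.dot(gi, gi)) * gi
    return g
```

When the two gradients agree, the plain sum is returned unchanged; when they conflict strongly, the component that would push one loss term in the wrong direction is removed.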
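The STFT-consistency idea can be made concrete with a short sketch: a spectrogram produced by a network need not correspond to any time-domain signal, and the mismatch can be measured by resynthesizing the waveform and re-analyzing it. The following is an illustrative sketch using SciPy; the function name `consistency_loss` and the choice of Frobenius norm are our assumptions, not the paper’s exact formulation:

```python
import numpy as np
from scipy.signal import stft, istft

def consistency_loss(spec, nperseg=512, noverlap=256):
    """Distance between a (possibly network-modified) spectrogram and
    the STFT of its own iSTFT resynthesis. A spectrogram that actually
    corresponds to a time-domain signal is a fixed point of
    iSTFT -> STFT, so this loss is (near) zero for consistent inputs."""
    _, wav = istft(spec, nperseg=nperseg, noverlap=noverlap)
    _, _, respec = stft(wav, nperseg=nperseg, noverlap=noverlap)
    # Trim to a common number of frames in case edge padding differs.
    frames = min(spec.shape[-1], respec.shape[-1])
    return np.linalg.norm(spec[..., :frames] - respec[..., :frames])
```

For a spectrogram computed directly from a waveform, the loss is near zero, while a magnitude-only (phase-discarded) spectrogram, which no waveform realizes exactly, yields a clearly positive value.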
The remainder of this paper is organized as follows. In
section 2, we review earlier work pertinent to ours. In section
3, we introduce our improvements to current GAN-based SE
models. In section 4, we present SCP-GAN results on the
Voice Bank+DEMAND dataset [19] and compare them to
current state-of-the-art (SOTA) models. Then, in section 5,
we provide an extensive ablation study showing the advantages
of the proposed methods. Finally, in section 6, we highlight the
methods’ contributions to the field.
arXiv:2210.14474v1 [cs.SD] 26 Oct 2022