
10 HOURS DATA IS ALL YOU NEED
Zeping Min⋆ Qian Ge⋆ Zhong Li†
⋆Peking University
†Microsoft Research Asia
⋆Equal contribution
†Corresponding author
ABSTRACT
We propose a novel procedure to generate pseudo Mandarin speech data, named character audio mix up (CAMP), which generates audio at the character scale. We also propose a method, named META-AUDIO, for building a Mandarin character-scale audio database adapted to CAMP; it makes full use of the audio data and greatly increases the data diversity of the database. Experiments show that our CAMP method is simple and quite effective. For example, training models with 10 hours of audio data from AISHELL-1 together with pseudo audio data generated by CAMP achieves a competitive character error rate (CER) of 11.07. Likewise, training with only 10 hours of audio data from the AIDATATANG dataset plus CAMP-generated pseudo audio data achieves a competitive CER of 8.26.
Index Terms—Automatic speech recognition, data augmentation, Mandarin, pseudo label.
1. INTRODUCTION
Training a practical neural network (NN) model often requires a large amount of labeled data. However, obtaining annotated data in practice is usually expensive and labor-intensive. Many efforts have been made to reduce the dependence of NN models on huge amounts of data, such as [1] and [2].
In the field of speech recognition, it is equally necessary to provide sufficient training data for deep NN models. Inspired by the pseudo label method [2] and the mix up method [1], we propose a novel procedure to generate pseudo-labeled data, named character audio mix up (CAMP), to alleviate the heavy data dependence in automatic speech recognition.
In summary, our contributions are as follows:
• We combine the advantages of pseudo label semi-supervised learning and mix up data augmentation into a novel, simple, and effective procedure for generating pseudo-labeled speech data, named character audio mix up (CAMP); a minimal sketch follows this list.
• We propose a META-AUDIO method for building a Mandarin character-scale audio database adapted to CAMP. META-AUDIO takes full advantage of the audio data, greatly increases the data diversity of the database, and reduces the difficulty of building it.
• Experiments show that the CAMP and META-AUDIO methods are simple but quite effective. Training models with 10 hours of audio data from AISHELL-1 together with pseudo audio data generated by CAMP, we achieve a competitive character error rate (CER) of 11.07. Training with only 10 hours of audio data from the AIDATATANG dataset plus CAMP-generated pseudo audio data again achieves a competitive CER of 8.26.
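To make the idea concrete, below is a minimal sketch of how a CAMP-style pseudo utterance could be assembled, assuming a META-AUDIO-like database that maps each Mandarin character to a list of short waveform clips; the function and variable names are our own illustration, not the authors' released code.

import random

import numpy as np


def camp_pseudo_utterance(text, char_db):
    """Assemble a pseudo-labeled utterance for `text` by sampling one
    clip per character from a META-AUDIO-like database (a dict mapping
    each character to a list of 1-D waveform arrays) and concatenating
    the clips along the time axis."""
    clips = []
    for ch in text:
        candidates = char_db.get(ch)
        if not candidates:
            raise KeyError(f"no audio clips stored for character {ch!r}")
        # Randomly choosing among several clips per character is what
        # gives the generated pseudo data its diversity.
        clips.append(random.choice(candidates))
    return np.concatenate(clips)

The resulting (waveform, text) pair can then serve as pseudo-labeled training data alongside the small amount of real audio.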
2. RELATED WORK
A lot of effort has been made to obtain satisfactory models from a limited number of training samples. In practice, one of the most effective approaches is data augmentation ([3], [4]), which is often carefully designed around the nature of the data itself and is hence domain-specific. For instance, in the field of computer vision (CV), common data augmentation approaches [5] include cropping, rotation, mix up ([1], [6]), and so on, all specifically developed for images. In the field of automatic speech recognition, data augmentation is often conducted in two ways. One way is to perform data augmentation in the frequency domain.
For example, [7] implemented data augmentation via a random linear transformation along the frequency dimension of the spectrogram.
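The exact transformation used in [7] is not reproduced here, but a minimal sketch of one random linear warp along the frequency axis of a spectrogram with shape (n_freq, n_time), under our own naming, might look as follows.

import numpy as np


def random_freq_warp(spec, max_scale=0.1):
    """Apply a random linear rescaling of the frequency axis of a
    spectrogram of shape (n_freq, n_time), interpolating each time
    frame at the warped frequency positions."""
    n_freq = spec.shape[0]
    alpha = 1.0 + np.random.uniform(-max_scale, max_scale)  # random slope
    warped_bins = np.clip(np.arange(n_freq) * alpha, 0, n_freq - 1)
    warped = np.empty_like(spec)
    for t in range(spec.shape[1]):
        # Linearly interpolate the frame at the stretched/compressed bins.
        warped[:, t] = np.interp(warped_bins, np.arange(n_freq), spec[:, t])
    return warped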
Another way is to perform data augmentation in the time domain. For example, in [8], a large amount of audio in noisy environments was synthesized by mixing clean audio with noise, followed by appropriate filtering based on the average power.
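As one plausible reading of that recipe (the precise filtering step in [8] may differ, and all names here are our own), the noise can be scaled by the ratio of average powers so the mixture hits a target signal-to-noise ratio:

import numpy as np


def mix_with_noise(clean, noise, snr_db):
    """Mix clean speech with a noise clip at a target SNR (in dB),
    scaling the noise based on the average powers of both signals."""
    # Tile or trim the noise so it matches the clean signal's length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    p_clean = np.mean(clean ** 2)  # average power of the speech
    p_noise = np.mean(noise ** 2)  # average power of the noise
    # Choose a gain so that 10 * log10(p_clean / (gain**2 * p_noise)) == snr_db.
    gain = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise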
Besides data augmentation, semi-supervised learning, which improves model performance with unlabeled training data, is also widely applicable and has been successful in many scenarios. There are mainly two types of solutions for semi-supervised learning. The first takes advantage of the continuity assumption: if a realistic perturbation is applied to an unlabeled sample, the prediction should not change significantly. Hence, minimizing the distance between the unlabeled data