10 HOURS DATA IS ALL YOU NEED

Zeping Min*   Qian Ge*   Zhong Li†

*Peking University
†Microsoft Research Asia
ABSTRACT
We propose a novel procedure to generate pseudo Mandarin
speech data, named CAMP (character audio mix up), which
aims at generating audio at the character scale. We also
propose a method for building a Mandarin character-scale
audio database adapted to CAMP, named META-AUDIO,
which makes full use of audio data and can greatly increase
the data diversity of the database. Experiments show that our
CAMP method is simple and quite effective. For example,
we train models with 10 hours of audio data from AISHELL-1
together with pseudo audio data generated by CAMP, and
achieve a competitive 11.07 character error rate (CER). We
also train with only 10 hours of audio data from the
AIDATATANG dataset together with pseudo audio data
generated by CAMP, again achieving a competitive 8.26 CER.
Index Terms: Automatic speech recognition, data augmentation, Mandarin, pseudo label.
1. INTRODUCTION
Training a practical neural network (NN) model often requires
a large amount of labeled data. However, obtaining annotated
data in practice is usually expensive and labor-intensive.
Many efforts have been made to reduce the dependence of
NN models on huge amounts of data, such as [1] and [2].
In the field of speech recognition, it is equally necessary to
provide sufficient training data for deep NN models. Inspired
by the pseudo label method [2] and the mix up method [1], we
propose a novel procedure to generate pseudo-labeled data,
named character audio mix up (CAMP), to alleviate the
heavy data dependence in automatic speech recognition.
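As a concrete illustration of the character-scale idea, a generation step of this kind can be sketched as follows. The `char_bank` contents, clip lengths, and sampling assumptions here are hypothetical stand-ins for illustration only, not the paper's actual database or pipeline:

```python
import random

import numpy as np

# Hypothetical per-character audio bank: each Mandarin character maps to a
# list of 1-D waveform arrays (e.g. 16 kHz mono), such as one built by a
# method like META-AUDIO. The clips below are random stand-in signals.
char_bank = {
    "你": [np.random.randn(4000), np.random.randn(3800)],
    "好": [np.random.randn(3500)],
    "吗": [np.random.randn(3200)],
}

def camp_generate(transcript, bank):
    """Concatenate one randomly chosen clip per character to form a
    pseudo-labeled (audio, text) utterance pair."""
    clips = [random.choice(bank[ch]) for ch in transcript if ch in bank]
    return np.concatenate(clips), transcript

audio, label = camp_generate("你好吗", char_bank)
# `audio` is a single concatenated waveform; `label` is its pseudo transcript.
```

Because each character can draw from several stored clips, repeated generation from the same transcript yields acoustically different pseudo utterances.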
In summary, our contributions are as follows:

• We successfully combine the advantages of pseudo-label semi-supervised learning and mix up data augmentation to propose a novel, simple, and effective procedure for generating pseudo-labeled speech data, named character audio mix up (CAMP).
• We propose META-AUDIO, a method for building a Mandarin character-scale audio database adapted to CAMP. META-AUDIO takes full advantage of audio data, can greatly increase the data diversity of the database, and reduces the difficulty of building it.

• Experiments show that the CAMP and META-AUDIO methods are simple but quite effective. Training models with 10 hours of audio data from AISHELL-1 together with pseudo audio data generated by CAMP, we achieve a competitive 11.07 character error rate (CER). Training with only 10 hours of audio data from the AIDATATANG dataset together with pseudo audio data generated by CAMP again achieves a competitive 8.26 CER.

*Equal contribution
†Corresponding author
2. RELATED WORK
A lot of effort has been made to obtain satisfactory models
given a limited number of training samples. In practice, one
of the most effective approaches is data augmentation [3], [4],
which is often designed carefully based on the nature of the
data itself and hence carries a certain domain specificity. For
instance, in the field of computer vision (CV), common data
augmentation approaches ([5]) typically include cropping,
rotation, mix up ([1], [6]), and so on, which are specifically
developed for images. In the field of automatic speech
recognition, data augmentation is often conducted as follows.
One way is to perform data augmentation in the frequency
domain. For example, [7] implemented data augmentation by
a random linear transformation on the frequency dimension
of the spectrogram. Another way is to perform data
augmentation in the time domain. For example, in [8], a large
amount of audio in a noisy environment was synthesized by
mixing clean audio with noises, followed by an appropriate
filtering based on the average power.
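The time-domain scheme of mixing clean audio with noise while balancing average power can be sketched roughly as follows. The target-SNR formulation and the example signals are our own illustrative assumptions, not the exact procedure of [8]:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that the ratio of clean to noise average power
    matches `snr_db`, then add it to the clean waveform."""
    noise = noise[: len(clean)]          # align lengths
    p_clean = np.mean(clean ** 2)        # average power of clean signal
    p_noise = np.mean(noise ** 2)        # average power of noise
    target_ratio = 10.0 ** (snr_db / 10.0)   # power ratio implied by SNR
    scale = np.sqrt(p_clean / (target_ratio * p_noise))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # 1 s, 440 Hz tone
noisy = mix_at_snr(clean, rng.standard_normal(16000), snr_db=10.0)
```

Sweeping `snr_db` over a range (e.g. 0 to 20 dB) produces noisy copies of each clean utterance at controlled degradation levels.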
Besides data augmentation, semi-supervised learning, which
improves model performance with unlabeled training data, is
also widely applicable and successful in many scenarios.
There are mainly two types of solutions for semi-supervised
learning. The first takes advantage of the continuity
assumption: if a realistic perturbation is applied to an
unlabeled example, the prediction should not change
significantly. Hence, minimizing the distance between the
model's predictions on unlabeled data and on their perturbed
counterparts provides a training signal.
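A minimal sketch of this consistency idea, using a toy linear model and Gaussian input perturbations (both stand-ins for an actual acoustic model and realistic perturbations), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy linear "model" standing in for an acoustic model: 80-dim features
# mapped to 10 output classes.
W = rng.standard_normal((80, 10))

def consistency_loss(x, noise_std=0.1):
    """Mean KL divergence between predictions on unlabeled inputs and on
    a randomly perturbed copy (the continuity assumption)."""
    p_clean = softmax(x @ W)
    p_pert = softmax((x + noise_std * rng.standard_normal(x.shape)) @ W)
    return np.mean(np.sum(p_clean * (np.log(p_clean) - np.log(p_pert)), axis=-1))

loss = consistency_loss(rng.standard_normal((4, 80)))
# `loss` is non-negative; minimizing it pushes the two predictions together.
```

In practice this unsupervised term is added to the supervised loss on the labeled subset, so unlabeled audio still shapes the decision boundary.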
arXiv:2210.13067v1 [cs.SD] 24 Oct 2022