The DKU-Tencent System for the VoxCeleb Speaker Recognition Challenge 2022
Xiaoyi Qin1,2, Na Li2, Yuke Lin1, Yiwei Ding2, Chao Weng2, Dan Su2, Ming Li1
1Data Science Research Center, Duke Kunshan University, Kunshan, China
2Tencent AI Lab, Shenzhen, China
ming.li@whu.edu.cn, 2020102110042@whu.edu.cn
Abstract
This paper describes the DKU-Tencent system for the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC22). In this challenge, we focus on track 1 and track 3. For track 1, multiple backbone networks are adopted to extract frame-level features. Since track 1 focuses on cross-age scenarios, we adopt cross-age trials and apply QMF for score calibration, where the magnitude-based quality measures achieve a large improvement. For track 3, the semi-supervised domain adaptation task, a pseudo-labeling method is adopted for domain adaptation. Considering the noisy labels produced by clustering, ArcFace is replaced by Sub-center ArcFace. The final submission achieves 0.107 mDCF on task 1 and 7.135% EER on task 3.
Index Terms: speaker recognition, cross-age, semi-supervised, domain adaptation
1. Task 1: Fully supervised speaker verification
1.1. Data Augmentation
We adopt on-the-fly data augmentation [1] to add additive background noise or convolutional reverberation noise to the time-domain waveform. The MUSAN [2] and RIR Noise [3] datasets are used as noise sources and room impulse response functions, respectively. Reverberation is applied by convolving the waveform with one of 40,000 simulated room impulse responses; we only use RIRs from small and medium rooms. To further diversify the training samples, we apply amplification or playback speed changes (pitch remains untouched) to the audio signals. In addition, we apply speaker augmentation with speed perturbation [4]: each utterance is sped up or slowed down by a factor of 1.1 or 0.9, yielding pitch-shifted utterances that are treated as coming from new speakers. As a result, the training data includes 3,276,027 (1,092,009 × 3) utterances from 17,982 (5,994 × 3) speakers. The probability of applying the Noise/RIR/Tempo/Volume augmentation is 0.66.
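As an illustration of this on-the-fly augmentation pipeline, the sketch below applies additive noise, reverberation, and random volume with probability 0.66, plus Kaldi-style speed perturbation for the ×3 speaker augmentation. It is not the authors' implementation; the library choice (torchaudio), SNR range, and volume range are assumptions for illustration only.

```python
import random
import torch
import torchaudio

SAMPLE_RATE = 16000

def add_noise(wav: torch.Tensor, noise: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Mix additive background noise into `wav` (both 1-D tensors) at the given SNR."""
    if noise.numel() < wav.numel():                       # loop the noise if it is too short
        noise = noise.repeat(wav.numel() // noise.numel() + 1)
    noise = noise[: wav.numel()]
    wav_power = wav.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp(min=1e-10)
    scale = torch.sqrt(wav_power / (noise_power * 10 ** (snr_db / 10)))
    return wav + scale * noise

def add_reverb(wav: torch.Tensor, rir: torch.Tensor) -> torch.Tensor:
    """Convolve the waveform with a peak-normalised room impulse response."""
    rir = rir / rir.abs().max()
    out = torch.nn.functional.conv1d(
        wav.view(1, 1, -1), rir.flip(0).view(1, 1, -1), padding=rir.numel() - 1
    ).view(-1)
    return out[: wav.numel()]

def speed_perturb(wav: torch.Tensor, factor: float) -> torch.Tensor:
    """Speed perturbation by resampling: tempo *and* pitch change, so the
    0.9x / 1.1x copies of an utterance are labelled as new speakers."""
    return torchaudio.functional.resample(
        wav, orig_freq=int(SAMPLE_RATE * factor), new_freq=SAMPLE_RATE
    )

def augment(wav: torch.Tensor, noise: torch.Tensor, rir: torch.Tensor) -> torch.Tensor:
    """Apply each waveform-level augmentation independently with probability 0.66."""
    if random.random() < 0.66:
        wav = add_noise(wav, noise, snr_db=random.uniform(5, 20))
    if random.random() < 0.66:
        wav = add_reverb(wav, rir)
    if random.random() < 0.66:
        wav = wav * random.uniform(0.5, 1.5)              # random volume (amplification)
    return wav

# Speaker augmentation: speed_perturb(wav, 0.9) and speed_perturb(wav, 1.1)
# are added to the training set under new speaker labels.
```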
1.2. Speaker Embedding Model
Generally, the speaker embedding model consists of three parts: a backbone network, an encoding layer, and a loss function. ArcFace [5], which reduces intra-speaker distances while increasing inter-speaker separation, is used as the loss function. The backbone networks and encoding layers are described as follows.
* Equal contribution. This work was done at Tencent during the first author's internship.
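For reference, a minimal ArcFace head in PyTorch is sketched below. The margin and scale values match those reported in Section 1.3, but the implementation details (initialization, clamping) are assumptions rather than the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFace(nn.Module):
    """Additive angular margin softmax (ArcFace): a margin m is added to the
    target-class angle, pulling same-speaker embeddings together and pushing
    different speakers apart on the unit hypersphere."""

    def __init__(self, embedding_dim: int, num_classes: int,
                 margin: float = 0.2, scale: float = 32.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.margin, self.scale = margin, scale

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin only to the target-class logits.
        target = F.one_hot(labels, num_classes=self.weight.size(0)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)
```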
1.2.1. Backbone Network
In this module, we introduce four different speaker verification systems: SimAM-ResNet34 [6], ResNet101 [7], ResNet152, and Res2Net101 [8]. The acoustic features are 80-dimensional log Mel-filterbank energies with a frame length of 25 ms and a hop size of 10 ms. The extracted features are mean-normalized before being fed into the deep speaker network.
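A minimal sketch of this feature extraction step is given below, using torchaudio's Kaldi-compatible filterbank routine; the sample rate and the dummy input are assumptions for illustration.

```python
import torch
import torchaudio

def extract_fbank(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """80-dim log Mel-filterbank features (25 ms window, 10 ms hop),
    mean-normalised along the time axis before entering the backbone."""
    feats = torchaudio.compliance.kaldi.fbank(
        waveform,                      # shape (1, num_samples)
        num_mel_bins=80,
        frame_length=25.0,             # milliseconds
        frame_shift=10.0,              # milliseconds
        sample_frequency=sample_rate,
    )                                  # shape (num_frames, 80)
    return feats - feats.mean(dim=0, keepdim=True)   # mean normalisation

# Example: a 3-second dummy utterance yields roughly 298 frames of 80-dim features.
wav = torch.randn(1, 3 * 16000)
print(extract_fbank(wav).shape)
```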
1.2.2. Encoding Layer
The encoding layer transforms variable-length feature maps into a fixed-dimensional vector, i.e., the frame-level features are converted to a segment-level representation. In this module, we adopt statistics pooling [9], standard deviation pooling, and attentive statistics pooling [10]. The statistics pooling layer computes the mean and standard deviation of the output feature maps along the temporal dimension, while the standard deviation pooling layer only calculates the standard deviation.
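The sketch below shows attentive statistics pooling in the style of [10]: an attention network weights each frame before the weighted mean and standard deviation are concatenated. The tensor layout (batch, channels, time) and the hidden dimension are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    """Attentive statistics pooling: frame-level weights are learned, and the
    weighted mean and standard deviation form the segment-level vector."""

    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv1d(in_dim, hidden_dim, kernel_size=1),
            nn.Tanh(),
            nn.Conv1d(hidden_dim, in_dim, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) frame-level features from the backbone.
        alpha = torch.softmax(self.attention(x), dim=-1)        # per-frame weights
        mean = torch.sum(alpha * x, dim=-1)
        var = torch.sum(alpha * x ** 2, dim=-1) - mean ** 2
        std = torch.sqrt(var.clamp(min=1e-8))
        return torch.cat([mean, std], dim=-1)                   # (batch, 2 * channels)

# Usage: pooled = AttentiveStatsPooling(2560)(frame_features)   # e.g. flattened C*F channels
```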
1.3. Training Strategy and Implementation Details
The speaker embedding model is trained in two stages. In the first stage, the baseline system adopts the SGD optimizer together with a ReduceLROnPlateau scheduler with a patience of 2 epochs. The initial learning rate is 0.1, the minimum learning rate is 1.0e-4, and the decay factor is 0.1. The margin and scale of ArcFace are set to 0.2 and 32, respectively. We perform a linear warmup of the learning rate over the first 2 epochs to prevent training instability and to speed up model training. The input frame length is fixed at 200 frames. In the second stage, we follow the SpeakIn training protocol [11] with large margin fine-tuning (LMFT) [12]. In the LMFT stage, we remove the speaker augmentation and reduce the probability of data augmentation to 0.33. Moreover, the margin is increased from 0.2 to 0.5. Depending on the speaker embedding model size and the GPU memory limit, the frame length is extended from 200 to 400 or 600. The learning rate decays from 1.0e-4 to 1.0e-5.
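The first-stage optimizer and scheduler settings can be wired up roughly as follows. This is a toy sketch, not the authors' code: the placeholder model, momentum, weight decay, epoch count, and the dummy epoch function are all assumptions for illustration.

```python
import torch

# Placeholder model standing in for backbone + pooling + ArcFace head.
model = torch.nn.Linear(80, 256)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# Decay the learning rate by 0.1 when the monitored loss plateaus for 2 epochs, down to 1e-4.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=2, min_lr=1e-4
)

WARMUP_EPOCHS, BASE_LR = 2, 0.1

def run_one_epoch() -> float:
    """Stand-in for a real epoch over 200-frame crops (400/600 frames during LMFT)."""
    x, y = torch.randn(32, 80), torch.randn(32, 256)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

for epoch in range(10):
    # Linear warmup of the learning rate over the first 2 epochs.
    if epoch < WARMUP_EPOCHS:
        for group in optimizer.param_groups:
            group["lr"] = BASE_LR * (epoch + 1) / WARMUP_EPOCHS
    epoch_loss = run_one_epoch()
    scheduler.step(epoch_loss)   # ReduceLROnPlateau reacts to the monitored metric
```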
1.4. Score Calibration and Normalization
Cosine similarity is used as the back-end scoring method. After scoring, the results of all trials are subjected to score normalization. We utilize Adaptive Symmetric Score Normalization (AS-Norm) [13] in our systems. The imposter cohort consists of the average of the length-normalized utterance-based embeddings of each training speaker, i.e., a speaker-wise cohort with 5,994 embeddings.
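A minimal sketch of cosine scoring with adaptive symmetric normalization is shown below. The top-N cohort size and the embedding dimension are assumptions, since they are not stated in this excerpt.

```python
import numpy as np

def cosine_score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

def as_norm(score: float, enroll: np.ndarray, test: np.ndarray,
            cohort: np.ndarray, top_n: int = 300) -> float:
    """Adaptive symmetric score normalization (AS-Norm).

    `cohort` holds one averaged, length-normalised embedding per training
    speaker (5,994 rows here). Each side of the trial is normalised by the
    mean/std of its top-N most similar cohort scores, and the two normalised
    scores are averaged."""
    def side_stats(emb: np.ndarray):
        scores = np.array([cosine_score(emb, c) for c in cohort])
        top = np.sort(scores)[-top_n:]              # adaptive: keep only top-N imposters
        return top.mean(), top.std() + 1e-8

    mu_e, std_e = side_stats(enroll)
    mu_t, std_t = side_stats(test)
    return 0.5 * ((score - mu_e) / std_e + (score - mu_t) / std_t)

# Usage with random stand-in embeddings:
rng = np.random.default_rng(0)
enroll, test = rng.normal(size=256), rng.normal(size=256)
cohort = rng.normal(size=(5994, 256))
print(as_norm(cosine_score(enroll, test), enroll, test, cohort))
```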
Quality measure functions (QMF) can calibrate the scores and improve system performance. As described in [14], we adopt two quality measures: speech duration and magnitude rate. For