DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification
Yuanyuan Wang (Tsinghua University); Yang Zhang (Tsinghua University); Zhiyong Wu (Tsinghua University); Zhihan Yang (Tsinghua University); Tao Wei (Ping An Technology); Kun Zou (Ping An Technology); Helen Meng (The Chinese University of Hong Kong)
SPS
Data augmentation is vital to the generalization ability and robustness of deep neural network (DNN) models. Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming, and the augmented samples lack diversity. In this paper, we present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification that generates diversified training samples in speaker embedding space with negligible extra computing cost. First, we augment training samples by perturbing speaker embeddings along semantic directions, which are obtained from speaker-wise covariance matrices. Second, accurate covariance matrices can only be estimated from robust speaker embeddings during training, so we introduce difficulty-aware additive margin softmax (DAAM-Softmax) to obtain optimal speaker embeddings. Finally, we let the number of augmented samples go to infinity and derive a closed-form upper bound of the expected loss with DASA, which achieves compatibility and efficiency. Extensive experiments demonstrate that the proposed approach achieves a remarkable performance improvement; the best result is a 14.6% relative reduction in EER on the CN-Celeb evaluation set.
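The core augmentation step described above — perturbing an embedding along semantic directions drawn from a speaker-wise covariance matrix — can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation; the function name, the `strength` scaling factor, and the toy dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_augment(embedding, speaker_cov, strength=0.5, n_samples=4):
    """Perturb one speaker embedding along semantic directions sampled
    from the speaker-wise covariance matrix (illustrative sketch)."""
    noise = rng.multivariate_normal(
        mean=np.zeros(embedding.shape[0]),
        cov=strength * speaker_cov,   # covariance scaled by augmentation strength
        size=n_samples,
    )
    # each row is one augmented embedding in the same embedding space
    return embedding[None, :] + noise

# toy example: 8-dim embeddings from 20 utterances of one speaker
utts = rng.normal(size=(20, 8))
cov = np.cov(utts, rowvar=False)          # speaker-wise covariance (8 x 8)
aug = semantic_augment(utts.mean(axis=0), cov)
print(aug.shape)  # (4, 8)
```

Because the perturbation happens in embedding space, no raw audio is re-synthesized, which is what keeps the extra computing cost negligible compared with signal-level augmentation.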