Speaker recognition with two-step multi-modal deep cleansing
Ruijie Tao (National University of Singapore); Kong Aik Lee (Institute for Infocomm Research, A*STAR); Zhan Shi (The Chinese University of Hong Kong, Shenzhen); Haizhou Li (The Chinese University of Hong Kong, Shenzhen)
Neural network-based speaker recognition has achieved significant improvement in recent years. A robust speaker representation must learn meaningful knowledge from both the hard and the easy samples in the training set to achieve good performance. However, noisy samples (i.e., samples with wrong labels) in the training set induce confusion and cause the network to learn incorrect representations. In this paper, we propose a two-step audio-visual deep cleansing framework to eliminate the effect of noisy labels in speaker representation learning. This framework contains a coarse-grained cleansing step to identify peculiar samples, followed by a fine-grained cleansing step to filter out the noisy labels. Our study starts from an efficient audio-visual speaker recognition system that achieves near-perfect equal error rates (EER) of 0.01%, 0.07%, and 0.13% on the Vox-O, Vox-E, and Vox-H test sets, respectively. With the proposed multi-modal cleansing mechanism, four different speaker recognition networks achieve an average improvement of 5.9%. Code has been made available at: https://github.com/TaoRuijie/AVCleanse.
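The abstract does not specify how the two cleansing steps are computed; the sketch below shows one plausible reading, assuming precomputed audio and face embeddings per training sample. The coarse step flags samples whose cosine similarity to their labelled speaker's centroid is low in either modality, and the fine step removes a flagged sample only when both modalities agree it is far from the clean samples. All function names, thresholds (coarse_keep, fine_threshold), and the centroid-based scoring are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import numpy as np


def l2_normalize(x, axis=-1, eps=1e-12):
    """Unit-normalize embeddings so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)


def two_step_cleansing(audio_emb, visual_emb, labels,
                       coarse_keep=0.9, fine_threshold=0.5):
    """Hypothetical two-step multi-modal cleansing (not the paper's code).

    audio_emb:  (N, Da) audio speaker embeddings.
    visual_emb: (N, Dv) face embeddings for the same samples.
    labels:     (N,) integer speaker labels, possibly noisy.
    Returns a boolean mask: True for samples to keep.
    """
    audio_emb = l2_normalize(audio_emb)
    visual_emb = l2_normalize(visual_emb)
    keep = np.ones(len(labels), dtype=bool)
    suspicious = np.zeros(len(labels), dtype=bool)

    # Step 1 (coarse-grained): within each labelled speaker, score every
    # sample against the speaker centroid in each modality and mark the
    # lowest-scoring fraction as "peculiar".
    for spk in np.unique(labels):
        idx = np.where(labels == spk)[0]
        for emb in (audio_emb, visual_emb):
            centroid = l2_normalize(emb[idx].mean(axis=0), axis=0)
            sims = emb[idx] @ centroid
            cutoff = np.quantile(sims, 1.0 - coarse_keep)
            suspicious[idx[sims < cutoff]] = True

    # Step 2 (fine-grained): re-examine only the peculiar samples against
    # the centroid of the non-suspicious samples; drop a sample only if
    # BOTH modalities reject its label.
    for spk in np.unique(labels):
        idx = np.where(labels == spk)[0]
        clean = idx[~suspicious[idx]]
        if len(clean) == 0:
            continue
        for i in idx[suspicious[idx]]:
            votes = 0
            for emb in (audio_emb, visual_emb):
                centroid = l2_normalize(emb[clean].mean(axis=0), axis=0)
                if emb[i] @ centroid < fine_threshold:
                    votes += 1
            if votes == 2:
                keep[i] = False
    return keep


# Toy usage with random embeddings (illustrative only).
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 192))   # e.g. audio speaker embeddings
v = rng.normal(size=(100, 512))   # e.g. face-recognition embeddings
y = rng.integers(0, 10, size=100)
mask = two_step_cleansing(a, v, y)
```

Requiring both modalities to reject a sample before discarding it is one way to realize the cross-modal idea the abstract suggests: a single-modality outlier (e.g., a noisy recording with a clearly matching face) is retained, while a sample both the voice and the face contradict is treated as a label error.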