Improving Reconstruction Loss Based Speaker Embedding In Unsupervised And Semi-Supervised Scenarios
Jaejin Cho, Piotr Zelasko, Jesús Villalba, Najim Dehak
SPS
Text-to-speech (TTS) models trained to minimize spectrogram reconstruction loss can learn speaker embeddings without explicit speaker identity supervision, unlike x-vector speaker identification (SID) systems. This form of speaker embedding learning is useful in unsupervised and semi-supervised scenarios, where none, or only some, of the training data have speaker labels. In this paper, we therefore evaluate speaker embeddings learned by training the spectrogram prediction network under unsupervised and semi-supervised scenarios. We experimented with different data sampling strategies. The best one samples two different segments, A and B, from the same utterance: the spectrogram of B is predicted given the phone sequence of B and the speaker embedding extracted from A. This method improved EER by 3.4% relative, compared to using the same whole utterance for both A and B without segmenting. In the unsupervised scenario, the best speaker embedding outperformed i-vectors, the state-of-the-art unsupervised speaker embedding, by 12.9% relative EER in speaker verification. We observed a high correlation between reconstruction loss and speaker embedding quality. In the semi-supervised scenario, having more unlabeled data in training led to better speaker verification performance: adding 5,314 unlabeled speakers to 800 labeled speakers improved EER by 10.8% relative.
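The best-performing sampling strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, segment length, and frame-list representation are assumptions made for clarity.

```python
import random

def sample_pair(frames, seg_len=150):
    """Draw two random windows, A and B, from one utterance.

    frames  : per-utterance sequence of spectrogram frames (hypothetical
              representation; any indexable sequence works here).
    seg_len : segment length in frames (illustrative value).

    Segment A supplies the speaker embedding; segment B is the
    reconstruction target, predicted from B's phone sequence plus
    the embedding extracted from A.
    """
    if len(frames) < seg_len:
        raise ValueError("utterance shorter than segment length")
    # Two independent random start positions within the same utterance.
    start_a = random.randrange(len(frames) - seg_len + 1)
    start_b = random.randrange(len(frames) - seg_len + 1)
    return frames[start_a:start_a + seg_len], frames[start_b:start_b + seg_len]
```

Because both segments come from the same utterance, the speaker identity is shared by construction, so no speaker label is needed, which is what makes the method applicable to unlabeled data.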
Chair: Takafumi Koshinaka