Accidental Learners: Spoken Language Identification in Multilingual Self-Supervised Models
Travis M Bartley (NVIDIA; CUNY); Fei Jia (NVIDIA Corporation); Krishna C Puvvada (NVIDIA); Samuel Kriman (NVIDIA); Boris Ginsburg (NVIDIA)
In this paper, we extend previous self-supervised approaches for language identification by experimenting with a Conformer-based architecture in a multilingual pre-training paradigm. We find that pre-trained speech models optimally encode language-discriminatory information in lower layers. Further, we demonstrate that the embeddings obtained from these layers are robust enough to classify unseen languages and different acoustic environments without additional training. After fine-tuning a pre-trained Conformer model on the VoxLingua107 dataset, we achieve results comparable to current state-of-the-art systems for language identification. Moreover, our model accomplishes this with 5x fewer parameters. We open-source the model through the NVIDIA NeMo toolkit.
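Since the abstract states that the model is released through the NVIDIA NeMo toolkit, the following is a minimal sketch of how such a checkpoint could be loaded and used for inference. It assumes NeMo's speaker/language classification API (`EncDecSpeakerLabelModel`); the checkpoint name shown is hypothetical, and the actual released model name may differ.

```python
# Minimal sketch: loading a pre-trained language-ID model via NVIDIA NeMo.
# Requires: pip install nemo_toolkit[asr]
import nemo.collections.asr as nemo_asr

# Load a pre-trained classification model from NGC.
# "lang_id_conformer" is a hypothetical name; check NGC/NeMo docs
# for the actual released checkpoint.
lid_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(
    model_name="lang_id_conformer"
)

# Extract an utterance-level embedding from a 16 kHz mono WAV file.
# Per the paper, embeddings drawn from lower encoder layers carry the
# most language-discriminatory information.
embedding = lid_model.get_embedding("sample_utterance.wav")
print(embedding.shape)
```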