Exploring universal singing speech language identification using self-supervised learning based front-end features
Xingming Wang (Wuhan University); Hao Wu (Speech, Audio and Music Intelligence (SAMI) group, ByteDance); Chen Ding (Speech, Audio and Music Intelligence (SAMI) group, ByteDance); Chuanzeng Huang (Speech, Audio and Music Intelligence (SAMI) group, ByteDance); Ming Li (Duke Kunshan University)
Despite the strong performance of spoken language identification (LID), the lack of large-scale singing LID databases has held back research on singing language identification (SLID). This paper presents Slingua, a dataset of over 3,200 hours for singing language identification. As baselines, we explore two self-supervised learning (SSL) models, WavLM and Wav2vec2.0, as feature extractors for both SLID and universal singing speech language identification (ULID), and compare them with traditional handcrafted features. Moreover, by additionally training with speech LID corpora, we compare performance on universal singing speech language identification. The results show that the SSL-based features generalize more robustly, especially in low-resource and open-set scenarios. The database can be downloaded from this repository: https://github.com/Doctor-Do/Slingua.
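To make the front-end setup concrete, below is a minimal sketch (not the authors' code) of how a pretrained SSL model such as WavLM can serve as a feature extractor whose pooled output feeds a downstream LID classifier; the Hugging Face checkpoint name and the mean-pooling choice are illustrative assumptions.

```python
# Sketch: SSL model as a frozen front-end feature extractor for LID.
# The checkpoint and pooling strategy are assumptions for illustration,
# not the paper's exact configuration.
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMModel

extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
ssl_model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
ssl_model.eval()  # frozen front-end; only the classifier would be trained

def ssl_features(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Return frame-level SSL features of shape (T, D) for one mono utterance."""
    inputs = extractor(waveform.numpy(), sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = ssl_model(**inputs).last_hidden_state  # (1, T, D)
    return hidden.squeeze(0)

# Mean-pool over time to obtain a fixed-size utterance embedding that a
# language classifier (e.g., a linear layer over language classes) consumes.
feats = ssl_features(torch.randn(16000))  # 1 second of dummy 16 kHz audio
utt_embedding = feats.mean(dim=0)
```

In such a pipeline, swapping WavLM for Wav2vec2.0 or for handcrafted features (e.g., filterbanks) changes only the front-end, which is what allows the comparison described in the abstract.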