SELF-SUPERVISED LEARNING WITH BI-LABEL MASKED SPEECH PREDICTION FOR STREAMING MULTI-TALKER SPEECH RECOGNITION

Zili Huang (Johns Hopkins University); Zhuo Chen (Microsoft); Naoyuki Kanda (Microsoft); Jian Wu (Microsoft); Yiming Wang (Microsoft); Jinyu Li (Microsoft); Takuya Yoshioka (Microsoft); Xiaofei Wang (Microsoft); Peidong Wang (Microsoft)

06 Jun 2023

Self-supervised learning (SSL), which learns representations from the input data itself, has achieved state-of-the-art results on various downstream speech tasks. However, most previous studies focused on offline single-talker applications, with limited investigation of multi-talker cases, especially in streaming scenarios. In this paper, we investigate SSL for streaming multi-talker speech recognition, which generates transcriptions of overlapping speakers in a streaming fashion. First, we observe that conventional SSL techniques do not work well on this task because they represent overlapping speech poorly. We then propose a novel SSL training objective, referred to as bi-label masked speech prediction, which explicitly preserves representations of all speakers in overlapping speech. We investigate various aspects of the proposed system, including data configuration and quantizer selection. The proposed SSL setup achieves substantially better word error rates than conventional SSL pre-training on the LibriSpeechMix dataset.
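To make the objective concrete, the following is a minimal sketch of a bi-label masked prediction loss, assuming a HuBERT-style setup in which a streaming encoder is trained to predict frame-level quantized targets at masked positions, with one prediction head per label stream. The class name, the head names, and the fixed speaker-to-head assignment are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLabelMSPHead(nn.Module):
    """Two prediction heads over a shared encoder, one per speaker label stream
    (a sketch of bi-label masked speech prediction; names are hypothetical)."""

    def __init__(self, dim: int, num_codes: int):
        super().__init__()
        # One head per label stream, e.g., head_a for the earlier-starting
        # speaker and head_b for the overlapping speaker (assumed convention).
        self.head_a = nn.Linear(dim, num_codes)
        self.head_b = nn.Linear(dim, num_codes)

    def forward(self, encoder_out, labels_a, labels_b, mask):
        # encoder_out: (B, T, D) encoder features on overlapped speech
        # labels_a, labels_b: (B, T) quantized targets for the two speakers
        # mask: (B, T) bool, True at masked frames
        feats = encoder_out[mask]  # (N, D), masked frames only
        loss_a = F.cross_entropy(self.head_a(feats), labels_a[mask])
        loss_b = F.cross_entropy(self.head_b(feats), labels_b[mask])
        # Both streams are predicted at every masked frame, so neither
        # speaker's representation can be discarded by the encoder.
        return loss_a + loss_b
```

How the two target streams are assigned to heads is a design choice: a fixed ordering (e.g., by speaker start time, as assumed above) keeps the loss simple, while a permutation-invariant variant would instead take the minimum loss over the two possible assignments.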
