
Reducing Language Confusion for Code-switching Speech Recognition with Token-level Language Diarization

Hexin Liu (Nanyang Technological University); Haihua Xu (Temasek Laboratories, Nanyang Technological University, Singapore); Paola Garcia (Johns Hopkins University); Andy W. H. Khong (Nanyang Technological University); Yi He (ByteDance); Sanjeev Khudanpur (Johns Hopkins University)

06 Jun 2023

Code-switching (CS) occurs when a speaker alternates between languages within a speech signal, and it leads to language confusion for automatic speech recognition (ASR). We address this language confusion to improve CS-ASR from two perspectives: incorporating language information and disentangling it. We incorporate language information into the CS-ASR model by dynamically biasing the model with token-level language posteriors, which are the outputs of a sequence-to-sequence auxiliary language diarization (LD) module. In contrast, the disentangling process reduces the difference between languages via adversarial training so as to normalize the two languages. We conduct experiments on the SEAME dataset. Compared to the baseline model, both joint optimization with LD and the language posterior bias improve performance. Comparing the proposed methods indicates that incorporating language information is more effective than disentangling it for reducing language confusion in CS speech.
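The abstract does not give implementation details, so the following is a minimal sketch of the language-posterior-bias idea, not the authors' implementation. It assumes a PyTorch encoder-decoder ASR model; the class name, dimensions, and the projection-and-add fusion are all illustrative assumptions. The sketch conditions each decoding step on the per-token language posterior produced by an auxiliary LD head.

```python
# A minimal sketch (not the authors' implementation) of biasing an ASR
# decoder with token-level language posteriors, assuming a two-language
# code-switching setup (e.g., Mandarin/English as in SEAME). Module and
# dimension names are illustrative assumptions.
import torch
import torch.nn as nn


class LanguagePosteriorBias(nn.Module):
    """Adds a learned projection of per-token language posteriors to the
    decoder hidden states, so each decoding step is conditioned on the
    predicted language identity of its token."""

    def __init__(self, d_model: int, num_languages: int = 2):
        super().__init__()
        # Projects the language posterior vector into the model dimension.
        self.proj = nn.Linear(num_languages, d_model)

    def forward(self, decoder_states: torch.Tensor,
                lang_posteriors: torch.Tensor) -> torch.Tensor:
        # decoder_states: (batch, tokens, d_model)
        # lang_posteriors: (batch, tokens, num_languages); each row sums
        #   to 1, e.g. softmax outputs of an auxiliary LD head.
        return decoder_states + self.proj(lang_posteriors)


if __name__ == "__main__":
    batch, tokens, d_model = 4, 12, 256
    bias = LanguagePosteriorBias(d_model)
    states = torch.randn(batch, tokens, d_model)
    # Fake LD posteriors over two languages, for illustration only.
    posteriors = torch.softmax(torch.randn(batch, tokens, 2), dim=-1)
    print(bias(states, posteriors).shape)  # torch.Size([4, 12, 256])
```

For the disentangling variant, adversarial training over language labels is commonly realized with a gradient reversal layer; the paper's exact adversarial setup is not specified here, so the sketch below shows only that standard mechanism. A language classifier is trained on encoder features while the reversed gradient pushes the encoder toward language-invariant representations.

```python
# A companion sketch of adversarial language disentangling via gradient
# reversal (a standard technique; details here are assumptions).
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha: float):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient sign so the encoder learns to *fool* the
        # language classifier, normalizing the two languages.
        return -ctx.alpha * grad_output, None


class AdversarialLanguageHead(nn.Module):
    def __init__(self, d_model: int, num_languages: int = 2,
                 alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha
        self.classifier = nn.Linear(d_model, num_languages)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, frames, d_model); returns per-frame
        # language logits, trained with a cross-entropy loss whose
        # gradient reaches the encoder with its sign reversed.
        return self.classifier(GradReverse.apply(encoder_states, self.alpha))
```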
