
Factorized AED: Factorized Attention-based Encoder-Decoder for Text-only Domain Adaptive ASR

Xun Gong (Shanghai Jiao Tong University); Wei Wang (Shanghai Jiao Tong University); Hang Shao (Shanghai Jiao Tong University); Yanmin Qian (Shanghai Jiao Tong University)

08 Jun 2023

End-to-end automatic speech recognition (ASR) systems have gained popularity for their simplified architecture and promising results. However, text-only domain adaptation remains a major challenge for E2E systems. Text-to-speech (TTS) based approaches require an auxiliary TTS model and fine-tuning of the ASR model on the synthesized data, which makes them costly to deploy. Language model (LM) fusion based approaches can achieve good performance but are sensitive to the interpolation parameters. To factorize out the language component of the AED model, we propose the factorized attention-based encoder-decoder (Factorized AED) model, whose decoder takes as input the posterior probabilities of a jointly trained LM. Moreover, in the context of domain adaptation, a domain-specific LM serves as a plug-and-play component for a well-trained Factorized AED model. In-domain experiments on LibriSpeech and out-of-domain experiments adapting from LibriSpeech to a variety of domains in GigaSpeech validate the effectiveness of the proposed methods. Results show 13% / 24% relative word error rate (WER) reduction on the LibriSpeech test sets and 8–34% relative WER reduction on 8 GigaSpeech target-domain test sets, compared to the AED baseline.
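To make the decoder-side factorization concrete, the following is a minimal sketch of the core idea as the abstract describes it: a Transformer-style AED decoder that conditions on the per-token posterior distribution of an external LM in addition to the usual token embeddings. All module names, layer sizes, and the projection-then-add combination are hypothetical illustration choices, not details taken from the paper.

```python
# Hypothetical sketch of a Factorized-AED-style decoder (PyTorch).
# Assumption: the LM posterior is fused by projecting the vocab-sized
# distribution into the hidden space and adding it to token embeddings;
# the paper's actual fusion mechanism may differ.
import torch
import torch.nn as nn

class FactorizedDecoder(nn.Module):
    """Decoder whose input combines token embeddings with the posterior
    distribution of a jointly trained (and later swappable) LM."""

    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Projects the LM posterior (a vocab-sized probability vector
        # per decoding step) into the decoder's hidden space.
        self.lm_proj = nn.Linear(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, lm_posteriors, encoder_out):
        # tokens:        (B, U)    previously emitted output tokens
        # lm_posteriors: (B, U, V) softmax output of the external LM
        # encoder_out:   (B, T, D) acoustic encoder states
        # (Causal target masking is omitted here for brevity.)
        x = self.embed(tokens) + self.lm_proj(lm_posteriors)
        h = self.decoder(x, memory=encoder_out)
        return self.out(h)
```

Under this reading, the "plug-and-play" property follows from the interface: at adaptation time, only the LM that produces `lm_posteriors` is replaced with a target-domain LM trained on text alone, while the acoustic encoder and the factorized decoder stay fixed.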
