
Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data

Changfeng Gao, Gaofeng Cheng, Runyan Yang, Han Zhu, Pengyuan Zhang, Yonghong Yan

Length: 00:07:14
10 Jun 2021

This paper presents a method to pre-train transformer-based encoder-decoder automatic speech recognition (ASR) models using sufficient target-domain text. During pre-training, we train the transformer decoder as a conditional language model with empty or artificial states rather than the real encoder states. With this pre-training strategy, the decoder can learn how to generate grammatical text sequences before learning how to generate correct transcriptions. In contrast to other methods that utilize text-only data to improve ASR performance, our method does not change the network architecture of the ASR model or introduce extra components such as text-to-speech (TTS) or text-to-encoder (TTE). Experimental results on the LibriSpeech corpus show that the proposed method can reduce the word error rate by over 10% relative, using 960 hours of transcriptions.
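The decoder pre-training idea described in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' code: the vocabulary size, model dimensions, placeholder-memory length, and function names are assumptions, and zero or random tensors stand in for the "empty" or "artificial" encoder states the decoder attends to while being trained as a conditional language model on unpaired text.

```python
import torch
import torch.nn as nn

# Assumed hyperparameters (not from the paper).
VOCAB_SIZE = 5000   # subword vocabulary size
D_MODEL = 256
PAD_ID = 0

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=6,
)
proj = nn.Linear(D_MODEL, VOCAB_SIZE)
criterion = nn.CrossEntropyLoss(ignore_index=PAD_ID)

def lm_pretrain_step(text_ids: torch.Tensor, use_empty_states: bool = True) -> torch.Tensor:
    """One pre-training step on unpaired target-domain text.

    text_ids: (batch, seq_len) token ids. The decoder attends to placeholder
    "encoder" states instead of real acoustic encoder outputs, so at this
    stage it only learns to generate grammatical text sequences.
    """
    batch, seq_len = text_ids.shape
    inputs, targets = text_ids[:, :-1], text_ids[:, 1:]

    # Empty states: all zeros; artificial states: random noise.
    mem_len = 10  # assumed length of the placeholder encoder-state sequence
    if use_empty_states:
        memory = torch.zeros(batch, mem_len, D_MODEL)
    else:
        memory = torch.randn(batch, mem_len, D_MODEL)

    # Standard causal mask so each position only sees earlier tokens.
    causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len - 1)
    hidden = decoder(embed(inputs), memory, tgt_mask=causal_mask)
    logits = proj(hidden)
    return criterion(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))

# After this stage, the pre-trained decoder weights would initialise the
# decoder of the full encoder-decoder ASR model, which is then fine-tuned
# on paired speech-text data; the ASR architecture itself is unchanged.
```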

Chairs:
Jinyu Li
