    Length: 00:13:12
10 Jun 2021

Recently, several end-to-end speech recognition methods known as transformer-transducers have been introduced successfully. In these methods, the transcription network is generally modelled by a transformer-based neural network, while the prediction network can be modelled by either a transformer or a recurrent neural network (RNN). In this paper, we propose novel multitask learning, joint optimization, and joint decoding methods for transformer-RNN-transducer systems. The main advantage of the proposed methods is that the model can retain information from a large text corpus, eliminating the need for an external language model (LM). We demonstrate the effectiveness of the proposed methods in experiments using the well-known ESPnet toolkit on the widely used LibriSpeech datasets, and show that they reduce the word error rate (WER) by 16.6% and 13.3% on the test-clean and test-other sets, respectively, without changing the overall model structure and without exploiting an external LM.
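The abstract relies on the standard transducer decomposition: a transcription (encoder) network over acoustic frames, a prediction network over previously emitted labels, and a joint network that combines the two into per-(time, label) logits. Below is a minimal PyTorch sketch of that generic transformer-RNN-transducer forward pass; it is not the paper's implementation, and all dimensions, layer counts, and the tanh joint form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransformerRNNTransducer(nn.Module):
    """Minimal transducer skeleton: transformer transcription network,
    RNN (LSTM) prediction network, and an additive-style joint network.
    Sizes are illustrative, not the paper's configuration."""

    def __init__(self, num_mels=80, vocab_size=1000, d_model=256, joint_dim=256):
        super().__init__()
        self.input_proj = nn.Linear(num_mels, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                               batch_first=True)
        # Transcription network: transformer encoder over acoustic frames.
        self.transcription_net = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Prediction network: embedding + LSTM over label history
        # (the usual blank/SOS prepend is omitted here for brevity).
        self.embed = nn.Embedding(vocab_size + 1, d_model)  # +1 for blank
        self.prediction_net = nn.LSTM(d_model, d_model, num_layers=1,
                                      batch_first=True)
        # Joint network: combine each frame with each prediction step.
        self.joint = nn.Sequential(
            nn.Linear(2 * d_model, joint_dim), nn.Tanh(),
            nn.Linear(joint_dim, vocab_size + 1),
        )

    def forward(self, feats, labels):
        # feats: (B, T, num_mels) acoustic features; labels: (B, U) token ids
        enc = self.transcription_net(self.input_proj(feats))    # (B, T, D)
        pred, _ = self.prediction_net(self.embed(labels))       # (B, U, D)
        # Broadcast every encoder frame against every prediction step.
        t = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)   # (B, T, U, D)
        u = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)   # (B, T, U, D)
        return self.joint(torch.cat([t, u], dim=-1))            # (B, T, U, V+1)

model = TransformerRNNTransducer()
logits = model(torch.randn(2, 50, 80), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 50, 12, 1001])
```

The resulting (B, T, U, V+1) logit lattice is what a transducer loss (e.g. RNN-T loss) or a joint beam-search decoder consumes; the paper's contribution sits on top of this structure rather than changing it.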

Chair:
Xiaodong Cui

