A Streaming On-Device End-To-End Model Surpassing Server-Side Conventional Model Quality And Latency
Tara Sainath, Rohit Prabhavalkar, Bo Li, Ke Hu, Golan Pundak, Cal Peyser, Trevor Strohman, David Rybach, Antoine Bruguier, David Garcia, Ruoming Pang, Arun Narayanan, Yonghui Wu, Yanzhang He, Anjuli Kannan, Shuo-yiin Chang, Chung-cheng Chiu, Yu Zhang, Zhi
Thus far, end-to-end (E2E) models have not been shown to outperform state-of-the-art conventional models with respect to both quality, i.e., word error rate (WER), and latency, i.e., the time it takes to finalize the hypothesis after the user stops speaking. In this paper, we develop a first-pass Recurrent Neural Network Transducer (RNN-T) model and a second-pass Listen, Attend and Spell (LAS) rescorer that together surpass a conventional model in both quality and latency. On the quality side, we incorporate a large number of utterances across varied domains \cite{Arun19} to increase the acoustic diversity and the vocabulary seen by the model. We also train with accented English speech to make the model more robust to different pronunciations. In addition, given the increased amount of training data, we explore a varied learning rate schedule. On the latency front, we explore using the end-of-sentence decision emitted by the model to close the microphone, and also introduce various optimizations to improve the speed of LAS rescoring. Overall, we find that RNN-T+LAS offers a better WER-versus-latency tradeoff than the conventional model. For example, at the same latency, RNN-T+LAS obtains an 8\% relative improvement in WER.
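As a concrete illustration of the two-pass setup described above, the following minimal sketch shows second-pass LAS rescoring: the streaming RNN-T first pass emits an n-best list, and each hypothesis is re-scored by interpolating its first-pass log-probability with a LAS score. The `Hypothesis` class, the interpolation weight `lambda_`, and the toy scores are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of two-pass rescoring: RNN-T produces an n-best list,
# and a LAS decoder supplies a second score for each hypothesis. The final
# transcript maximizes a weighted combination of the two log-probabilities.
from dataclasses import dataclass
from typing import List

@dataclass
class Hypothesis:
    text: str
    rnnt_log_prob: float  # first-pass score from the RNN-T beam search
    las_log_prob: float   # second-pass score from the LAS decoder

def rescore(nbest: List[Hypothesis], lambda_: float = 0.5) -> Hypothesis:
    """Return the hypothesis with the best interpolated score."""
    return max(
        nbest,
        key=lambda h: lambda_ * h.rnnt_log_prob + (1.0 - lambda_) * h.las_log_prob,
    )

# Toy usage: the second-pass LAS score can overturn the first-pass ranking.
nbest = [
    Hypothesis("call mom", rnnt_log_prob=-1.2, las_log_prob=-2.5),
    Hypothesis("call tom", rnnt_log_prob=-1.4, las_log_prob=-1.1),
]
print(rescore(nbest).text)  # -> "call tom"
```

On the latency side, the microphone-closing idea can be sketched as monitoring the model's own end-of-sentence posterior frame by frame and stopping as soon as it crosses a threshold. The threshold value and the posterior stream below are assumptions chosen for illustration, not values from the paper.

```python
# Hypothetical end-of-sentence (EOS) endpointing: close the microphone at the
# first frame where the model's </s> posterior exceeds a threshold, instead
# of waiting for an external endpointer to declare the utterance finished.
from typing import List

EOS_THRESHOLD = 0.8  # assumed value; in practice tuned for the WER/latency tradeoff

def mic_close_frame(eos_posteriors: List[float]) -> int:
    """Index of the first frame at which the mic would close, or the stream
    length if the EOS decision never fires."""
    for i, p in enumerate(eos_posteriors):
        if p >= EOS_THRESHOLD:
            return i
    return len(eos_posteriors)

print(mic_close_frame([0.01, 0.05, 0.2, 0.9, 0.95]))  # -> 3
```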