Distilling Attention Weights for CTC-Based ASR Systems
Takafumi Moriya, Hiroshi Sato, Tomohiro Tanaka, Takanori Ashihara, Ryo Masumura, Yusuke Shinohara
SPS
We present a novel training approach for connectionist temporal classification (CTC)-based automatic speech recognition (ASR) systems. CTC models are promising for building both conventional acoustic models and end-to-end (E2E) ASR models. However, CTC models have difficulty capturing the correct timing of each output label because timing is not given explicitly in the training data. In this paper, we propose a new auxiliary task with frame-wise targets for enhancing CTC models. To construct these targets, called the attention matrix, we utilize attention weights generated by an attention-based encoder-decoder model (S2S). The attention matrix is the sum of the products of the attention weights (spike timing information) and the corresponding target vectors (probability information), and it is used to compute an S2S-to-CTC knowledge distillation loss. The attention matrix therefore makes CTC models jointly trainable with respect to both spike timings and their posteriors. Experiments on Japanese ASR tasks demonstrate that our proposal is effective for CTC model training; it achieves a 10.2% (E2E) / 9.4% (acoustic model) relative reduction in the character/kana-syllable error rates compared with models trained using only the CTC loss.
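As a rough sketch of the idea described above (not the authors' implementation), the attention matrix can be formed by summing, over decoder steps, the outer product of each step's attention weights and its target vector, and a frame-wise cross-entropy against the CTC posteriors can serve as the distillation loss. All function names, shapes, and the exact loss normalization below are assumptions for illustration:

```python
import numpy as np

def attention_matrix(att_weights, target_probs):
    """Build frame-wise soft targets from S2S attention.

    att_weights:  (U, T) attention weights over T encoder frames,
                  one row per decoder (output-label) step.
    target_probs: (U, V) per-step target distributions over a
                  vocabulary of size V (e.g. one-hot label vectors).

    Summing attention-weighted target vectors over the U decoder
    steps yields a (T, V) matrix: each encoder frame receives a
    soft label distribution aligned with the attention spikes.
    """
    return att_weights.T @ target_probs

def distillation_loss(ctc_log_probs, att_mat):
    """Frame-wise cross-entropy between CTC log-posteriors
    (T, V) and the attention-matrix targets (T, V),
    averaged over the T frames."""
    return -np.sum(att_mat * ctc_log_probs) / att_mat.shape[0]
```

In a joint training setup, this distillation term would be added to the standard CTC loss with an interpolation weight, so the CTC model is supervised both at the sequence level and at the frame level.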