Attention-Based Gated Scaling Adaptive Acoustic Model for CTC-Based Speech Recognition
Fenglin Ding, Wu Guo, Li-Rong Dai, Jun Du
In this paper, we propose a novel adaptive technique that uses an attention-based gated scaling (AGS) scheme to improve deep feature learning for connectionist temporal classification (CTC) acoustic modeling. In AGS, the outputs of each hidden layer of the main network are scaled by an auxiliary gate matrix generated from the lower layer through an attention mechanism. Furthermore, the auxiliary AGS layers and the main network are trained jointly, without requiring second-pass model training or additional speaker information such as i-vectors. On the Mandarin AISHELL-1 dataset, the proposed AGS yields a 7.94% character error rate (CER). To the best of our knowledge, this is the best published result for end-to-end systems trained on the full AISHELL-1 training set.
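As a rough illustration of the scaling step described above, the following PyTorch sketch shows one way an AGS block could gate a hidden layer's output with an attention-derived matrix computed from the layer below. The layer width, the additive-attention form, and the sigmoid gate are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class AGSLayer(nn.Module):
    """Minimal sketch of an attention-based gated scaling (AGS) block.

    A gate matrix is produced from the lower layer's output via an
    additive-attention style projection; the current layer's output is
    then scaled elementwise by that gate. All sizes and the exact
    attention form are illustrative assumptions.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.gate_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, lower: torch.Tensor, current: torch.Tensor) -> torch.Tensor:
        # lower, current: (batch, time, hidden_dim)
        # Attention-style interaction between the current layer (query)
        # and the lower layer (key).
        scores = torch.tanh(self.query(current) + self.key(lower))
        # Gate matrix in (0, 1), same shape as the hidden activations.
        gate = torch.sigmoid(self.gate_proj(scores))
        # Scale the main network's hidden outputs elementwise.
        return current * gate


# Example usage with random activations; the block is jointly trainable
# with the rest of the network, as the abstract describes.
layer = AGSLayer(hidden_dim=320)
lower = torch.randn(4, 100, 320)
current = torch.randn(4, 100, 320)
scaled = layer(lower, current)  # shape: (4, 100, 320)
```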