STRUCTURED STATE SPACE DECODER FOR SPEECH RECOGNITION AND SYNTHESIS
Koichi Miyazaki (CyberAgent, Inc.); Masato Murata (CyberAgent, Inc.); Tomoki Koriyama (CyberAgent, Inc.)
Automatic speech recognition (ASR) systems developed in recent years have shown promising results with self-attention models (e.g., Transformer and Conformer), which are replacing conventional recurrent neural networks. Meanwhile, the structured state space model (S4) has recently been proposed, achieving strong results on various long-sequence modeling tasks, including raw speech classification. Like the Transformer, the S4 model can be trained in parallel. In this study, we apply S4 as the decoder for both ASR and text-to-speech (TTS) tasks and compare it with the Transformer decoder. For the ASR task, our experimental results demonstrate that the proposed model achieves a competitive word error rate (WER) of 1.88%/4.25% on the LibriSpeech test-clean/test-other sets and a character error rate (CER) of 3.80%/2.63%/2.98% on the CSJ eval1/eval2/eval3 sets. Furthermore, the proposed model is more robust than the standard Transformer model, particularly for long-form speech, on both datasets. In the TTS task, the proposed method outperforms the Transformer baseline.
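For context, the parallel-training property noted above follows from the standard state space formulation underlying S4 (background from the S4 literature, not a detail restated from this paper's body). The continuous-time model

\[ x'(t) = A x(t) + B u(t), \qquad y(t) = C x(t) \]

is discretized with a step size $\Delta$ into a linear recurrence

\[ x_k = \bar{A} x_{k-1} + \bar{B} u_k, \qquad y_k = \bar{C} x_k, \]

which unrolls over a length-$L$ input into a convolution $y = \bar{K} * u$ with kernel $\bar{K} = (\bar{C}\bar{B},\, \bar{C}\bar{A}\bar{B},\, \dots,\, \bar{C}\bar{A}^{L-1}\bar{B})$. Computing $\bar{K}$ once lets the entire sequence be processed in parallel during training, while the recurrent form supports step-by-step autoregressive decoding at inference time.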