SPEECHMOE2: MIXTURE-OF-EXPERTS MODEL WITH IMPROVED ROUTING
Zhao You, Shulin Feng, Dan Su, Dong Yu
Mixture-of-experts based acoustic models with dynamic routing mechanisms have shown promising results for speech recognition. The design of the router architecture is important for achieving large model capacity and high computational efficiency. Our previous work, SpeechMoE, only uses a local grapheme embedding to help the routers make routing decisions. To further improve speech recognition performance across varying domains and accents, we propose a new router architecture which integrates additional global domain and accent embeddings into the router input to promote adaptability. Experimental results show that the proposed SpeechMoE2 achieves lower character error rate (CER) than SpeechMoE with a comparable number of parameters on both multi-domain and multi-accent tasks. Specifically, the proposed method provides up to 1.6%~4.8% relative CER improvement on the multi-domain task and 1.9%~17.7% relative CER improvement on the multi-accent task. Moreover, increasing the number of experts yields consistent performance improvement while keeping the computational cost constant.
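The routing idea described above can be illustrated with a minimal PyTorch sketch; this is not the authors' implementation, and the class name, dimensions, and top-1 gating rule are assumptions for illustration. The router's gating input concatenates the frame-level hidden state and grapheme embedding with utterance-level domain and accent embeddings broadcast over time, and each frame is dispatched to a single expert.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingConditionedRouter(nn.Module):
    # Hypothetical sketch of a MoE router whose input combines local
    # (frame-level) and global (utterance-level) embeddings, as the
    # abstract describes. Dimensions and top-1 gating are assumptions.
    def __init__(self, hidden_dim, grapheme_dim, domain_dim, accent_dim, num_experts):
        super().__init__()
        router_in = hidden_dim + grapheme_dim + domain_dim + accent_dim
        self.gate = nn.Linear(router_in, num_experts)

    def forward(self, hidden, grapheme_emb, domain_emb, accent_emb):
        # hidden:       (batch, time, hidden_dim)   frame-level features
        # grapheme_emb: (batch, time, grapheme_dim) local grapheme embedding
        # domain_emb:   (batch, domain_dim)         global domain embedding
        # accent_emb:   (batch, accent_dim)         global accent embedding
        T = hidden.size(1)
        # Broadcast the global utterance-level embeddings across all frames.
        domain = domain_emb.unsqueeze(1).expand(-1, T, -1)
        accent = accent_emb.unsqueeze(1).expand(-1, T, -1)
        router_input = torch.cat([hidden, grapheme_emb, domain, accent], dim=-1)
        logits = self.gate(router_input)      # (batch, time, num_experts)
        probs = F.softmax(logits, dim=-1)
        top1 = probs.argmax(dim=-1)           # one expert selected per frame
        return probs, top1

Under such sparse top-1 routing, each frame activates only one expert regardless of how many experts exist, which is consistent with the abstract's observation that adding experts increases capacity while the computational cost stays constant.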