Deep Neural Networks Based Automatic Speech Recognition For Four Ethiopian Languages
Solomon Teferra Abate, Martha Yifiru Tachbelie, Tanja Schultz
In this work, we present speech recognition systems for four Ethiopian languages: Amharic, Tigrigna, Oromo, and Wolaytta. We used comparable training corpora of about 20 to 29 hours of speech and about 1 hour of evaluation speech for each language. For Amharic and Tigrigna, lexical and language models of different vocabulary sizes were developed, while for Oromo and Wolaytta the training lexicons were used for decoding. By using Deep Neural Network (DNN) based acoustic models, we achieved relative word error rate (WER) reductions ranging from 15.1% to 31.45% across the languages. The relative improvement obtained for the Wolaytta speech recognition system (31.45%) is much higher than that achieved for the other languages, which we attribute to the weaker language model and the larger amount of training speech used for Wolaytta.
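As a minimal sketch of how the reported figures are defined (the baseline and DNN WER values below are hypothetical; only the relative-reduction figures above come from the abstract), relative WER reduction can be computed as:

    def relative_wer_reduction(baseline_wer: float, dnn_wer: float) -> float:
        """Relative WER reduction (%) of a DNN system over a baseline system."""
        return (baseline_wer - dnn_wer) / baseline_wer * 100.0

    # Hypothetical example: a baseline WER of 40.0% reduced to 27.42% by the
    # DNN acoustic model corresponds to a 31.45% relative reduction.
    print(round(relative_wer_reduction(40.0, 27.42), 2))  # -> 31.45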