End To End Speech Recognition Error Prediction With Sequence To Sequence Learning
Prashant Serai, Adam Stiff, Eric Fosler-Lussier
SPS
Simulating the errors made by a speech recognizer on plain text has proven useful for training downstream NLP tasks to be robust to real ASR errors at test time. Prior work in this domain has focused on modeling confusions at the phonetic level, using a lexicon to convert from words to phones and back, usually accompanied by an FST language model. We present a novel end-to-end model to simulate ASR errors: a convolutional sequence-to-sequence model trained to take a word sequence as direct input and predict a word sequence as output. This end-to-end modeling improves on prior published results for recall of recognition errors made by a Switchboard ASR system on unseen Fisher data; we also demonstrate cross-domain robustness by predicting errors made by an unrelated cloud-based ASR system on a Virtual Patient task.
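As a rough illustration of the word-in, word-out interface such an error simulator exposes, the sketch below corrupts a reference word sequence using a toy, hand-written confusion table. All names and the table itself are invented for this sketch; the paper's actual model learns these substitutions end to end with a convolutional sequence-to-sequence network rather than looking them up.

```python
import random

# Toy word-level confusion table (invented for illustration only; a real
# end-to-end model learns such confusions from parallel reference and
# ASR-hypothesis transcripts instead of using a fixed dictionary).
CONFUSIONS = {
    "their": ["there", "they're"],
    "to": ["two", "too"],
    "recognize": ["wreck a nice"],  # a confusion can span multiple words
}

def simulate_asr_errors(words, p_err=0.5, rng=None):
    """Map a reference word sequence to a simulated ASR hypothesis."""
    rng = rng or random.Random(0)
    hypothesis = []
    for w in words:
        if w in CONFUSIONS and rng.random() < p_err:
            # Substitute a confusable variant; multi-word variants are
            # split so the output stays a flat word sequence.
            hypothesis.extend(rng.choice(CONFUSIONS[w]).split())
        else:
            hypothesis.append(w)
    return hypothesis

if __name__ == "__main__":
    ref = ["their", "going", "to", "recognize", "speech"]
    print(simulate_asr_errors(ref, p_err=1.0, rng=random.Random(0)))
```

Pairs of clean text and such simulated hypotheses can then serve as noisy training data for a downstream NLP task.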