Replacing Human Audio With Synthetic Audio For On-Device Unspoken Punctuation Prediction
Daria Soboleva, Ondrej Skopek, Márius Šajgalík
SPS
Length: 00:12:29
We present a novel multi-modal unspoken punctuation prediction system for English that combines acoustic and text features. We demonstrate for the first time that, by relying exclusively on synthetic data generated with a prosody-aware text-to-speech system, we can outperform a model trained on expensive human audio recordings on the unspoken punctuation prediction task. Our model architecture is well suited for on-device use: hash-based embeddings of automatic speech recognition text output are combined with acoustic features as input to a quasi-recurrent neural network, keeping the model size small and latency low.
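The abstract's pipeline can be illustrated with a minimal sketch: hashed token embeddings (no stored vocabulary, which keeps the on-device footprint small) are concatenated with per-token acoustic features and fed through a quasi-recurrent layer with fo-pooling. All dimensions, names, and the window-size-1 gating below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, ACOUSTIC_DIM, HIDDEN = 8, 4, 6
VOCAB_BUCKETS = 32  # number of hash buckets (assumption, not from the paper)

def hash_embed(tokens, table):
    """Look up embeddings by hashing tokens into buckets -- no vocabulary table."""
    ids = [hash(t) % VOCAB_BUCKETS for t in tokens]
    return table[ids]                      # shape (T, EMB_DIM)

def qrnn_layer(X, Wz, Wf, Wo):
    """QRNN with fo-pooling: gates computed in parallel, then a cheap recurrence."""
    Z = np.tanh(X @ Wz)                    # candidate activations
    F = 1 / (1 + np.exp(-(X @ Wf)))        # forget gates
    O = 1 / (1 + np.exp(-(X @ Wo)))        # output gates
    c = np.zeros(HIDDEN)
    outs = []
    for t in range(X.shape[0]):            # element-wise recurrence only
        c = F[t] * c + (1 - F[t]) * Z[t]
        outs.append(O[t] * c)
    return np.stack(outs)                  # shape (T, HIDDEN)

tokens = ["hello", "how", "are", "you"]    # ASR text output (example)
table = rng.normal(size=(VOCAB_BUCKETS, EMB_DIM))
acoustic = rng.normal(size=(len(tokens), ACOUSTIC_DIM))  # e.g. pause/pitch stats

# Concatenate the two modalities per token, then run the QRNN layer.
X = np.concatenate([hash_embed(tokens, table), acoustic], axis=1)
Wz, Wf, Wo = (rng.normal(size=(EMB_DIM + ACOUSTIC_DIM, HIDDEN)) for _ in range(3))
H = qrnn_layer(X, Wz, Wf, Wo)
print(H.shape)  # one hidden state per token: (4, 6)
```

A punctuation classifier head (not shown) would map each hidden state to labels such as comma, period, or question mark; the QRNN's parallel gate computation is what keeps inference latency low on device.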
Chairs:
Thomas Drugman