A Comparison Of Discrete Latent Variable Models For Speech Representation Learning
Henry Zhou, Alexei Baevski, Michael Auli
SPS
Neural latent variable models enable the discovery of interesting structure in speech audio data. This paper presents a comparison of two approaches broadly based on either predicting future time-steps or auto-encoding the input signal. Our study compares the representations learned by VQ-VAE and vq-wav2vec in terms of sub-word unit discovery and phoneme recognition performance. Results show that future time-step prediction with vq-wav2vec achieves better performance, and the best system reaches an ABX error rate of 13.22 on the ZeroSpeech 2019 phoneme discrimination challenge.
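Both models discretize continuous speech features by mapping each frame to its nearest entry in a learned codebook. A minimal sketch of that quantization step, assuming a simple nearest-neighbor lookup with a hypothetical `quantize` helper (not the authors' implementation, which also involves straight-through or Gumbel-softmax gradients during training):

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-neighbor vector quantization: map each continuous
    frame representation z[t] to its closest codebook entry.
    Hypothetical illustration of the discretization shared by
    VQ-VAE and vq-wav2vec, not the paper's actual code."""
    # Squared Euclidean distance between every frame and every code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)       # one discrete unit per frame
    return codebook[idx], idx        # quantized frames and unit ids

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 discrete codes, 4-dim each
z = rng.normal(size=(5, 4))          # 5 speech frames
q, idx = quantize(z, codebook)       # q: (5, 4), idx: (5,)
```

The resulting index sequence `idx` is what plays the role of discovered sub-word units in evaluations such as the ABX phoneme discrimination task.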
Chairs:
Isabel Trancoso