ASR Is All You Need: Cross-Modal Distillation for Lip Reading
Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman
The goal of this work is to train strong models for visual speech recognition without requiring human-annotated ground truth data. We achieve this by distilling from an Automatic Speech Recognition (ASR) model that has been trained on a large-scale audio-only corpus. We use a cross-modal distillation method that combines CTC with a frame-wise cross-entropy loss. Our contributions are fourfold: (i) we show that ground truth transcriptions are not necessary to train a lip reading system; (ii) we show how arbitrary amounts of unlabelled video data can be leveraged to improve performance; (iii) we demonstrate that distillation significantly speeds up training; and (iv) we obtain state-of-the-art results on the challenging LRS2 and LRS3 datasets when training only on publicly available data.
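Below is a minimal sketch, not the authors' code, of how such a combined objective might look in PyTorch: a CTC loss computed against ASR-derived pseudo-transcriptions plus a frame-wise cross-entropy term against the teacher's per-frame posteriors. The function name, weighting factor `alpha`, and tensor shapes are assumptions made for illustration.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, pseudo_labels,
                      input_lengths, label_lengths, alpha=0.5):
    """Hypothetical combined objective: CTC on ASR pseudo-labels plus
    frame-wise cross-entropy against the teacher's posteriors.

    student_logits: (T, B, C) raw scores from the lip-reading student.
    teacher_probs:  (T, B, C) per-frame posteriors from the ASR teacher.
    pseudo_labels:  (B, S) token sequences decoded by the ASR teacher.
    """
    log_probs = F.log_softmax(student_logits, dim=-1)

    # Sequence-level term: CTC against the teacher's pseudo-transcriptions.
    ctc = F.ctc_loss(log_probs, pseudo_labels, input_lengths, label_lengths,
                     blank=0, zero_infinity=True)

    # Frame-level term: cross-entropy between teacher and student
    # distributions, averaged over frames and batch.
    ce = -(teacher_probs * log_probs).sum(dim=-1).mean()

    return alpha * ctc + (1 - alpha) * ce
```

The relative weighting of the two terms (here a single scalar `alpha`) is an illustrative choice; the paper itself should be consulted for the exact loss formulation and schedule.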