Looking Enhances Listening: Recovering Missing Speech Using Images
Tejas Srinivasan, Ramon Sanabria, Florian Metze
Speech is understood better when visual context is available; for this reason, there have been many attempts to use images to adapt automatic speech recognition (ASR) systems. Recent work, however, has shown that visually adapted ASR models use images only as a regularization signal, while completely ignoring their semantic content. In this paper, we present a set of experiments that show the utility of the visual modality under noisy conditions. Our results show that multimodal ASR models can recover words that are masked in the input acoustic signal by grounding their transcriptions in the visual representations. We observe that integrating visual context can yield up to a 35% relative improvement in masked word recovery. These results demonstrate that end-to-end multimodal ASR systems can become more robust to noise by leveraging visual context.
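To make the idea of visual grounding under acoustic masking concrete, the following is a minimal, hypothetical sketch, not the authors' architecture: a global image embedding is projected into the acoustic encoder's space and exposed to the decoder's attention, so the decoder can fall back on visual context when frames are masked. All module names, dimensions, and the fusion scheme here are illustrative assumptions.

```python
# Hypothetical sketch of visually grounded end-to-end ASR (not the paper's exact model).
# A precomputed global image feature is prepended to the acoustic encoder memory so the
# attention-based decoder can draw on it when acoustic frames are masked out.

import torch
import torch.nn as nn


class VisuallyGroundedASR(nn.Module):
    def __init__(self, n_mels=80, vocab_size=1000, d_model=256, img_dim=2048):
        super().__init__()
        # Acoustic encoder over log-Mel frames (batch, time, n_mels)
        self.encoder = nn.LSTM(n_mels, d_model, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.enc_proj = nn.Linear(2 * d_model, d_model)
        # Project a pooled CNN image embedding into the encoder's space (assumed img_dim)
        self.img_proj = nn.Linear(img_dim, d_model)
        # Simple attention-based decoder over the fused (visual + acoustic) memory
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, feats, img_feat, prev_tokens):
        # feats: (B, T, n_mels) acoustic features, possibly with masked spans
        # img_feat: (B, img_dim) global image embedding
        # prev_tokens: (B, U) previous output tokens (teacher forcing)
        enc, _ = self.encoder(feats)
        memory = self.enc_proj(enc)                    # (B, T, d_model)
        vis = self.img_proj(img_feat).unsqueeze(1)     # (B, 1, d_model) "visual frame"
        memory = torch.cat([vis, memory], dim=1)       # (B, T+1, d_model) fused memory
        dec, _ = self.decoder(self.embed(prev_tokens)) # (B, U, d_model)
        ctx, _ = self.attn(dec, memory, memory)        # attend over fused memory
        return self.out(dec + ctx)                     # (B, U, vocab_size)


if __name__ == "__main__":
    model = VisuallyGroundedASR()
    feats = torch.randn(2, 120, 80)
    feats[:, 40:60, :] = 0.0            # crude stand-in for masking a word's frames
    img_feat = torch.randn(2, 2048)
    prev_tokens = torch.randint(0, 1000, (2, 12))
    logits = model(feats, img_feat, prev_tokens)
    print(logits.shape)                 # torch.Size([2, 12, 1000])
```

In this sketch, zeroing a span of acoustic frames is only a stand-in for the paper's word-masking setup; the point it illustrates is that the decoder retains a visual attention target even when the corresponding acoustic evidence is absent.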