Audio-visual Speech Enhancement with a Deep Kalman Filter Generative Model
Ali Golmakani (Inria Nancy Grand ); Mostafa Sadeghi (INRIA); romain serizel (Université de Lorraine)
Deep latent variable generative models based on the variational autoencoder (VAE) have shown promising performance for audio-visual speech enhancement (AVSE). The underlying idea is to learn a VAE-based audio-visual prior distribution for clean speech data and then combine it with a statistical noise model to recover the speech signal from a noisy audio recording and video (lip images) of the target speaker. Existing generative models developed for AVSE do not take into account the sequential nature of speech data, which prevents them from fully exploiting the information carried by the visual data. In this paper, we present an audio-visual deep Kalman filter (AV-DKF) generative model that assumes a first-order Markov chain over the latent variables and effectively fuses audio-visual data. Moreover, we develop an efficient inference methodology to estimate the speech signal at test time. We conduct a set of experiments to compare different variants of generative models for speech enhancement. The results demonstrate the superiority of the AV-DKF model over both its audio-only version and the non-sequential audio-only and audio-visual VAE-based models.
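To make the key idea concrete, the sketch below illustrates (in PyTorch) what a first-order Markov, visually conditioned transition prior of the form p(z_t | z_{t-1}, v_t) = N(mu(z_{t-1}, v_t), diag(sigma^2(z_{t-1}, v_t))) could look like, where z_t is the latent state and v_t a visual (lip) embedding. This is a minimal illustration under assumed layer sizes, names, and architecture choices; it is not the authors' implementation of AV-DKF.

```python
# Minimal, illustrative sketch (not the authors' code) of a deep-Kalman-filter-style
# audio-visual transition prior p(z_t | z_{t-1}, v_t).
# All dimensions and module names are assumptions chosen for illustration.

import torch
import torch.nn as nn


class AVTransitionPrior(nn.Module):
    """First-order Markov transition over latents, conditioned on visual features."""

    def __init__(self, z_dim: int = 32, v_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + v_dim, hidden),
            nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, z_dim)       # mean of p(z_t | z_{t-1}, v_t)
        self.logvar = nn.Linear(hidden, z_dim)   # log-variance (diagonal covariance)

    def forward(self, z_prev: torch.Tensor, v_t: torch.Tensor):
        h = self.net(torch.cat([z_prev, v_t], dim=-1))
        return self.mu(h), self.logvar(h)


def sample_latent_sequence(prior: AVTransitionPrior, v_seq: torch.Tensor) -> torch.Tensor:
    """Ancestral sampling of z_{1:T} given a sequence of visual embeddings v_{1:T}."""
    batch, T, _ = v_seq.shape
    z_t = torch.zeros(batch, prior.mu.out_features)  # simple zero initial state z_0
    samples = []
    for t in range(T):
        mu, logvar = prior(z_t, v_seq[:, t])
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
        samples.append(z_t)
    return torch.stack(samples, dim=1)  # (batch, T, z_dim)


if __name__ == "__main__":
    prior = AVTransitionPrior()
    v = torch.randn(4, 50, 64)           # 4 utterances, 50 frames of lip embeddings
    z = sample_latent_sequence(prior, v)
    print(z.shape)                        # torch.Size([4, 50, 32])
```

In a full AVSE system, such a prior would be paired with a decoder mapping z_t to a clean-speech spectrogram frame and combined with a statistical noise model at inference time; the details of that combination are specific to the paper and are not reproduced here.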