MuSE: Multi-Modal Target Speaker Extraction With Visual Cues
Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li
A speaker extraction algorithm relies on a speech sample from the target speaker as a reference to focus its attention. Such reference speech is typically pre-recorded. On the other hand, the temporal synchronization between speech and lip movement also serves as an informative cue. Motivated by this idea, we study a novel technique that uses speech-lip visual cues to extract the reference target speech directly from the mixture speech at inference time, without the need for pre-recorded reference speech. We propose a multi-modal speaker extraction network, named MuSE, that is conditioned only on a lip image sequence. MuSE not only outperforms competitive baselines in terms of SI-SDR and PESQ, but also shows consistent improvement in cross-dataset evaluations.
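As a rough illustration of the idea, the sketch below wires up a lip-conditioned speaker extraction network in PyTorch: a 1-D convolutional speech encoder, a visual stream upsampled to the audio frame rate and fused with the audio latents, and a mask-based decoder. This is not the authors' implementation; every module choice, name, and dimension here is an assumption made for the example.

```python
# Minimal sketch (not the MuSE authors' code) of a lip-conditioned speaker
# extraction network. All module names, dimensions, and the fusion strategy
# are illustrative assumptions.
import torch
import torch.nn as nn


class LipConditionedExtractor(nn.Module):
    def __init__(self, enc_dim=256, visual_dim=512, hidden=256):
        super().__init__()
        # 1-D conv encoder turns the mixture waveform into a latent sequence.
        self.speech_encoder = nn.Conv1d(1, enc_dim, kernel_size=16, stride=8)
        # Visual front-end: projects a pre-extracted lip image sequence
        # (assumed here to arrive as per-frame feature vectors).
        self.visual_proj = nn.Linear(visual_dim, hidden)
        # Extractor: models the fused audio-visual sequence.
        self.extractor = nn.LSTM(enc_dim + hidden, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)
        # Mask head: predicts a soft mask over the speech-encoder output.
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, enc_dim),
                                       nn.Sigmoid())
        # Transposed-conv decoder maps the masked latents back to a waveform.
        self.decoder = nn.ConvTranspose1d(enc_dim, 1, kernel_size=16, stride=8)

    def forward(self, mixture, lip_feats):
        # mixture: (batch, samples); lip_feats: (batch, video_frames, visual_dim)
        z = self.speech_encoder(mixture.unsqueeze(1))        # (B, E, T)
        T = z.size(-1)
        v = self.visual_proj(lip_feats)                      # (B, F, H)
        # Upsample visual features to the audio frame rate so the two
        # streams can be concatenated frame by frame.
        v = nn.functional.interpolate(v.transpose(1, 2), size=T,
                                      mode="nearest").transpose(1, 2)
        fused = torch.cat([z.transpose(1, 2), v], dim=-1)    # (B, T, E+H)
        h, _ = self.extractor(fused)
        mask = self.mask_head(h).transpose(1, 2)             # (B, E, T)
        return self.decoder(z * mask).squeeze(1)             # (B, samples)


if __name__ == "__main__":
    net = LipConditionedExtractor()
    mix = torch.randn(2, 16000)      # 1 s of 16 kHz mixture audio
    lips = torch.randn(2, 25, 512)   # 25 video frames of lip embeddings
    print(net(mix, lips).shape)      # torch.Size([2, 16000])
```

The key design point the sketch tries to capture is that the network is conditioned only on the lip stream: no pre-recorded enrollment utterance enters the model, so the visual cue alone steers which speaker's speech the mask preserves.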
Chairs: Chandan K A Reddy