SPS Members: Free
IEEE Members: $11.00
Non-members: $15.00
Length: 03:35:48
07 Jun 2021

During a conversation, humans use both sight and hearing to focus on the speaker of interest. Despite this, traditional speech enhancement and separation algorithms rely only on acoustic signals. Although advances in deep learning have allowed these algorithms to achieve high performance, speech enhancement and separation systems still struggle when the background noise level is high, limited by their use of a single modality. Recent work has therefore investigated incorporating visual information from the speaker of interest to perform speech enhancement and separation. In this tutorial, we provide an overview of deep-learning-based techniques for audio-visual speech enhancement and separation. Specifically, we consider how the field has evolved from the first single-microphone, speaker-dependent systems to the current state of the art. In addition, we present several demos developed to showcase our research in the field. The tutorial highlights the potential of this emerging research topic with two aims: helping beginners navigate the large number of approaches in the literature, and inspiring experts with insights and perspectives on current challenges and possible future research directions.
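To make the core idea concrete, the sketch below shows one common way such systems are built: a mask-based audio-visual enhancer that encodes the noisy spectrogram and per-frame visual embeddings of the target speaker, fuses the two streams, and predicts a time-frequency mask. This is a minimal illustration, not the architecture presented in the tutorial; all layer sizes, names (`AudioVisualEnhancer`, `mask_head`), and the assumption that visual features are pre-extracted and upsampled to the audio frame rate are illustrative.

```python
# Minimal sketch of mask-based audio-visual speech enhancement.
# Assumptions: visual features come from an external lip-reading encoder
# and are already aligned to the audio frame rate.
import torch
import torch.nn as nn

class AudioVisualEnhancer(nn.Module):
    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        # Audio stream: encode magnitude-spectrogram frames.
        self.audio_enc = nn.Linear(n_freq, hidden)
        # Visual stream: encode per-frame lip-region embeddings.
        self.visual_enc = nn.Linear(visual_dim, hidden)
        # Fusion: concatenate both streams, model time with an LSTM.
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True)
        # Predict a time-frequency mask in [0, 1] for the target speaker.
        self.mask_head = nn.Linear(hidden, n_freq)

    def forward(self, noisy_mag, visual_feats):
        # noisy_mag:    (batch, time, n_freq) noisy magnitude spectrogram
        # visual_feats: (batch, time, visual_dim) aligned video features
        a = torch.relu(self.audio_enc(noisy_mag))
        v = torch.relu(self.visual_enc(visual_feats))
        fused, _ = self.fusion(torch.cat([a, v], dim=-1))
        mask = torch.sigmoid(self.mask_head(fused))
        # Masking the noisy input yields the estimated clean magnitude.
        return mask * noisy_mag

if __name__ == "__main__":
    model = AudioVisualEnhancer()
    noisy = torch.rand(2, 100, 257)   # 2 utterances, 100 frames
    video = torch.rand(2, 100, 512)   # matching visual embeddings
    enhanced = model(noisy, video)
    print(enhanced.shape)             # torch.Size([2, 100, 257])
```

The visual stream is what distinguishes this from an audio-only enhancer: when the acoustic mixture is ambiguous (e.g., two overlapping voices), the lip movements indicate which time-frequency regions belong to the target speaker.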
