Deep Audio-Visual Speech Separation With Attention Mechanism
Chenda Li, Yanmin Qian
Previous work has shown that audio-visual fusion is a practical approach to the speech separation task in the cocktail party problem. In this paper, we explore a better strategy for utilizing visual representations with an attention mechanism. Whereas the previous baseline uses only one visual stream, that of the target speaker, our model is fed the speaker-dependent visual streams of both speakers in the mixed audio and predicts the two separated speech streams simultaneously. To further enhance performance, an attention mechanism is designed into the audio-visual speech separation architecture. The results show that the proposed approach works well for audio-visual speech separation: our best model achieves an obvious and consistent improvement over the traditional method that uses only the target speaker's visual stream.
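The abstract does not specify the fusion design, but a minimal PyTorch sketch of one plausible reading, in which the mixture's audio frames attend over each speaker's visual stream and two separation masks are predicted in parallel, might look like the following. All module names, feature dimensions, and the mask-based output here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioVisualAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse a mixture-audio embedding with two
    speaker-dependent visual streams via cross-modal attention, then
    estimate one separation mask per speaker simultaneously."""

    def __init__(self, audio_dim=256, visual_dim=512, num_heads=4, num_bins=257):
        super().__init__()
        # Project visual features to the audio embedding size (assumed dims).
        self.visual_proj = nn.Linear(visual_dim, audio_dim)
        # Audio frames (queries) attend over a speaker's visual stream.
        self.cross_attn = nn.MultiheadAttention(audio_dim, num_heads,
                                                batch_first=True)
        # Shared mask estimator applied to the fused features of each speaker.
        self.mask_head = nn.Sequential(
            nn.Linear(2 * audio_dim, audio_dim),
            nn.ReLU(),
            nn.Linear(audio_dim, num_bins),
            nn.Sigmoid(),
        )

    def forward(self, audio, visual_a, visual_b):
        # audio:    (batch, T, audio_dim)  mixture embedding
        # visual_*: (batch, T, visual_dim) per-speaker visual embeddings
        masks = []
        for visual in (visual_a, visual_b):
            v = self.visual_proj(visual)
            # Cross-modal attention: audio queries, visual keys/values.
            attended, _ = self.cross_attn(audio, v, v)
            fused = torch.cat([audio, attended], dim=-1)
            masks.append(self.mask_head(fused))  # (batch, T, num_bins)
        # Two masks, one per speaker, predicted in a single forward pass.
        return torch.stack(masks, dim=1)  # (batch, 2, T, num_bins)

# Usage with random tensors standing in for real features:
model = AudioVisualAttentionFusion()
audio = torch.randn(8, 100, 256)
va, vb = torch.randn(8, 100, 512), torch.randn(8, 100, 512)
masks = model(audio, va, vb)
print(masks.shape)  # torch.Size([8, 2, 100, 257])
```

Feeding both visual streams and emitting both outputs at once, as the abstract describes, removes the need to run the separator once per target speaker.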