OFF-THE-SHELF DEEP INTEGRATION FOR RESIDUAL-ECHO SUPPRESSION
Amir Ivry, Israel Cohen, Baruch Berdugo
Residual-echo suppression (RES) systems suppress the echo and preserve the desired speech in a mixture of the two. In hands-free speech communication, RES can also be addressed as a source separation (SS) or speech enhancement (SE) problem, where the echo is treated as an interfering speech signal. In this study, we fine-tune three pre-trained deep learning-based systems originally designed for RES, SS, and SE, and show that the best-performing system for the RES task varies with the acoustic conditions. We then propose a real-time, data-driven integration of these systems, in which a neural network continuously tracks the system that achieves the best performance during both single-talk and double-talk periods. Experiments with 100 h of real and synthetic data show that the integrated system outperforms each individual system in terms of echo suppression and speech distortion across various acoustic environments.
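To illustrate the integration idea described above, the following is a minimal sketch, assuming PyTorch: three pre-trained back-ends (RES, SS, SE) each produce a candidate output, and a small recurrent selector network emits per-frame weights that track which back-end to trust. The names (SelectorNet, integrate), the network size, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a data-driven integration of three candidate systems.
import torch
import torch.nn as nn


class SelectorNet(nn.Module):
    """Predicts, per time frame, how much to trust each back-end (RES, SS, SE)."""

    def __init__(self, feat_dim: int, num_systems: int = 3):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True)  # causal tracking over frames
        self.head = nn.Linear(64, num_systems)

    def forward(self, mic_feats: torch.Tensor) -> torch.Tensor:
        # mic_feats: (batch, frames, feat_dim) features of the microphone/AEC signal
        h, _ = self.rnn(mic_feats)
        return self.head(h).softmax(dim=-1)  # (batch, frames, num_systems)


def integrate(candidates: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # candidates: (batch, frames, num_systems, frame_len) back-end outputs
    # weights:    (batch, frames, num_systems) selector probabilities
    # Weighted sum over the system axis yields the integrated output per frame.
    return (weights.unsqueeze(-1) * candidates).sum(dim=2)


if __name__ == "__main__":
    batch, frames, feat_dim, frame_len = 2, 50, 40, 160
    selector = SelectorNet(feat_dim)
    feats = torch.randn(batch, frames, feat_dim)
    cands = torch.randn(batch, frames, 3, frame_len)
    out = integrate(cands, selector(feats))
    print(out.shape)  # torch.Size([2, 50, 160])
```

In this sketch the soft weights blend the candidates; a hard selection of the best-performing system, as the abstract describes, would correspond to taking the argmax of the selector output at inference time.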