REAL-TIME MULTICHANNEL SPEECH SEPARATION AND ENHANCEMENT USING A BEAMSPACE-DOMAIN-BASED LIGHTWEIGHT CNN
Marco Olivieri (Politecnico di Milano); Luca Comanducci (Politecnico di Milano); Mirco Pezzoli (Politecnico di Milano); Davide Balsarri (BdSound); Luca Menescardi (BdSound); Michele Buccoli (BdSound S.r.l.); Simone Pecorino (BdSound); Antonio Grosso (BdSound); Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano)
SPS
Speech separation and speech enhancement concern the extraction of the speech emitted by a target speaker in the presence of interfering speakers and noise, respectively. A plethora of practical applications, such as home assistants and teleconferencing, require some form of speech separation or enhancement as a pre-processing step before applying Automatic Speech Recognition (ASR) systems. In recent years, most techniques have focused on applying deep learning to either time-frequency or time-domain representations of the input audio signals. In this paper we propose a real-time multichannel speech separation and enhancement technique based on the combination of a directional representation of the sound field, denoted as beamspace, with a lightweight Convolutional Neural Network (CNN). We consider the case where the Direction-Of-Arrival (DOA) of the target speaker is approximately known, a scenario where the power of the beamspace-based representation can be fully exploited, while we make no assumption regarding the identity of the talker. We present experiments where the model is trained on simulated data and tested on real recordings, and we compare the proposed method with a similar state-of-the-art technique.