MULTI-CHANNEL SPEECH DENOISING FOR MACHINE EARS
Cong Han, Merve Kaya, Kyle Hoefer, Simon Carlile, Malcolm Slaney
This work describes a speech denoising system for machine ears that aims to improve speech intelligibility and the overall listening experience in noisy environments. We recorded approximately 100 hours of audio data with reverberation and moderate environmental noise using a pair of microphone arrays placed around each of the two ears, and then mixed the recordings to simulate adverse acoustic scenes. We trained a multi-channel speech denoising network (MCSDN) on the mixed recordings. To improve the training, we employ an unsupervised method, the complex angular central Gaussian mixture model (cACGMM), to acquire cleaner speech from noisy recordings to serve as the learning target. We propose an MCSDN-Beamforming-MCSDN framework for the inference stage. The results of the subjective evaluation show that the cACGMM improves the training data, resulting in better noise reduction and user preference, and that the entire system improves intelligibility and the listening experience in noisy situations.
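The abstract does not specify how the beamforming stage of the MCSDN-Beamforming-MCSDN pipeline is realized. As a minimal sketch, assuming a mask-driven MVDR beamformer (Souden formulation) where the first MCSDN pass supplies time-frequency masks, the middle stage could look as follows; all function names, mask shapes, and the choice of MVDR are illustrative assumptions, not the authors' stated method.

```python
import numpy as np

def spatial_covariance(stft, mask):
    """Mask-weighted spatial covariance matrices, shape (F, C, C).

    stft: complex STFT of shape (F, T, C); mask: (F, T) values in [0, 1].
    """
    weighted = stft * mask[..., None]                      # (F, T, C)
    cov = np.einsum('ftc,ftd->fcd', weighted, stft.conj())
    return cov / np.maximum(mask.sum(axis=1), 1e-8)[:, None, None]

def mvdr_beamform(stft, speech_mask, noise_mask, ref_channel=0):
    """MVDR beamformer steered by estimated speech/noise masks.

    Returns a single-channel enhanced STFT of shape (F, T).
    """
    phi_s = spatial_covariance(stft, speech_mask)          # speech covariance
    phi_n = spatial_covariance(stft, noise_mask)           # noise covariance
    F, _, C = phi_s.shape
    phi_n = phi_n + 1e-6 * np.eye(C)[None]                 # regularize before inversion
    num = np.linalg.solve(phi_n, phi_s)                    # Phi_n^{-1} Phi_s per frequency
    trace = np.trace(num, axis1=1, axis2=2)[:, None]
    w = num[..., ref_channel] / np.maximum(trace.real, 1e-8)  # filter weights (F, C)
    return np.einsum('fc,ftc->ft', w.conj(), stft)         # beamformed output (F, T)
```

In this sketch, the first MCSDN pass would provide `speech_mask` and `noise_mask` from the noisy multi-channel input, and the second MCSDN pass would post-filter the beamformed output before resynthesis.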