FRCRN: Boosting Feature Representation using Frequency Recurrence for Monaural Speech Enhancement
Shengkui Zhao, Bin Ma, Karn N. Watcharasupat, Woon-Seng Gan
Convolutional recurrent networks (CRN) integrating a convolutional encoder-decoder (CED) structure and a recurrent structure have achieved promising performance for monaural speech enhancement. However, feature representation across the frequency context is highly constrained because of the limited receptive fields of the convolutions in the CED. In this paper, we propose a convolutional recurrent encoder-decoder (CRED) structure to boost feature representation along the frequency axis. The CRED applies frequency recurrence to the 3D convolutional feature maps along the frequency axis after each convolution, capturing long-range frequency correlations and thereby enhancing the feature representation of speech inputs. The frequency recurrence is realized efficiently using a feedforward sequential memory network (FSMN). In addition to the CRED, we insert two stacked FSMN layers between the encoder and the decoder to further model temporal dynamics. We name the proposed framework Frequency Recurrent CRN (FRCRN). FRCRN is designed to predict the complex Ideal Ratio Mask (cIRM) in the complex-valued domain and is optimized with both time-frequency-domain and time-domain losses. Our approach achieves state-of-the-art performance on wideband benchmark datasets and ranked 2nd in the real-time fullband track of the ICASSP 2022 Deep Noise Suppression (DNS) Challenge in terms of Mean Opinion Score (MOS) and Word Accuracy (WAcc).
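The sketch below illustrates one way frequency recurrence over a 3D convolutional feature map could look in practice: an FSMN-style memory block, approximated here by a depthwise 1-D convolution with a residual connection, scanned along the frequency axis of each time frame. This is a minimal, hedged example rather than the authors' implementation; the class name FrequencyFSMN, the projection size, and the number of memory taps are illustrative assumptions.

```python
# Minimal PyTorch sketch (not the paper's code) of an FSMN-like memory block
# applied along the frequency axis of a (batch, channels, time, freq) feature map.
import torch
import torch.nn as nn


class FrequencyFSMN(nn.Module):
    """FSMN-style memory block scanned along the frequency axis (illustrative)."""

    def __init__(self, channels: int, proj_dim: int = 64, memory_taps: int = 11):
        super().__init__()
        self.in_proj = nn.Linear(channels, proj_dim)
        # Depthwise 1-D convolution over frequency plays the role of the FSMN memory block.
        self.memory = nn.Conv1d(
            proj_dim, proj_dim, kernel_size=memory_taps,
            padding=memory_taps // 2, groups=proj_dim, bias=False,
        )
        self.out_proj = nn.Linear(proj_dim, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq) feature map from a convolutional layer.
        b, c, t, f = x.shape
        h = x.permute(0, 2, 3, 1).reshape(b * t, f, c)       # treat freq as a sequence per frame
        p = self.in_proj(h)                                    # project channels
        m = self.memory(p.transpose(1, 2)).transpose(1, 2)     # memory taps over frequency
        y = self.out_proj(p + m)                               # combine projection and memory
        return x + y.reshape(b, t, f, c).permute(0, 3, 1, 2)   # residual back onto the feature map


if __name__ == "__main__":
    feat = torch.randn(2, 128, 100, 64)     # (batch, channels, frames, freq bins)
    print(FrequencyFSMN(128)(feat).shape)   # torch.Size([2, 128, 100, 64])
```

In a CRED-like encoder, such a block would follow each convolution so that every frequency bin's representation can draw on distant bins, rather than only those inside the convolutional receptive field.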