
Fully Convolutional Recurrent Networks For Speech Enhancement

Maximilian Strake, Bruno Defraene, Kristoff Fluyt, Wouter Tirry, Tim Fingscheidt

04 May 2020

Convolutional recurrent neural networks (CRNs) using convolutional encoder-decoder (CED) structures have shown promising performance for single-channel speech enhancement. These CRNs handle temporal modeling by integrating long short-term memory (LSTM) layers between the convolutional encoder and decoder. However, in such a CRN, the organization of internal representations in feature maps and the focus on local structure provided by the convolutional mappings have to be discarded for fully-connected LSTM processing. Furthermore, CRNs can be quite restricted with respect to the feature space dimension at the input of the LSTM, which, due to its fully-connected nature, requires a large number of trainable parameters. As a first novelty, we propose to replace the fully-connected LSTM by a convolutional LSTM (ConvLSTM) and call the resulting network a fully convolutional recurrent network (FCRN). Secondly, since the ConvLSTM retains the structured organization of its input feature maps, we can show that this helps to internally represent the harmonic structure of speech, allowing us to handle high-dimensional input features with fewer trainable parameters than an LSTM. The proposed FCRN clearly outperforms CRN reference models with similar numbers of trainable parameters in terms of PESQ, STOI, and segmental input-output SNR difference.
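A minimal Keras sketch of the idea is given below, assuming a TensorFlow backend; the layer counts, filter numbers, kernel sizes, and frequency-axis-only striding are illustrative assumptions, not the authors' exact configuration. The key element is the ConvLSTM2D bottleneck, whose gates and recurrent state are themselves convolutional, so the encoder's feature-map structure is preserved instead of being flattened for a fully-connected LSTM.

    import tensorflow as tf
    from tensorflow.keras import layers

    F, C = 260, 1  # hypothetical: F frequency bins per frame, C input channels

    # Input: a sequence of spectral frames, shape (time, freq, 1, channels).
    inp = layers.Input(shape=(None, F, 1, C))

    # Convolutional encoder, applied to every frame via TimeDistributed,
    # downsampling along the frequency axis only.
    x = layers.TimeDistributed(
        layers.Conv2D(32, (5, 1), strides=(2, 1), padding="same",
                      activation="relu"))(inp)
    x = layers.TimeDistributed(
        layers.Conv2D(64, (5, 1), strides=(2, 1), padding="same",
                      activation="relu"))(x)

    # ConvLSTM bottleneck: recurrent temporal modeling with convolutional
    # gates, retaining the feature-map organization of the encoder output.
    x = layers.ConvLSTM2D(64, (5, 1), padding="same",
                          return_sequences=True)(x)

    # Convolutional decoder mirroring the encoder.
    x = layers.TimeDistributed(
        layers.Conv2DTranspose(32, (5, 1), strides=(2, 1), padding="same",
                               activation="relu"))(x)
    out = layers.TimeDistributed(
        layers.Conv2DTranspose(C, (5, 1), strides=(2, 1), padding="same"))(x)

    model = tf.keras.Model(inp, out)
    model.summary()  # compare parameter counts against a fully-connected LSTM bottleneck

Because the ConvLSTM shares its weights across the frequency axis, its parameter count is independent of the frequency dimension of the encoder output, which is what allows high-dimensional input features without the parameter blow-up of a fully-connected LSTM.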
