  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:30
08 May 2022

Our previous work on frequency bin-wise independent processing achieved a dramatic reduction in the computational complexity of recurrent neural networks (RNNs). Building on that result, this paper realizes a massive deployment of RNNs along the time dimension by using a channel-wise long short-term memory (LSTM) network. With this approach, RNN processing along the frequency and time dimensions of the time-frequency domain is unified, which allows us to combine a convolutional neural network (CNN) and an RNN into a basic neural operator and finally leads to the Densely Connected Recurrent Convolutional Neural Network (DRC-NET). DRC-NET fully exploits the infinite impulse response of the RNN and the finite impulse response of the CNN; these balanced response characteristics significantly improve system performance. Experimental results show that both the non-causal and causal versions of DRC-NET outperform state-of-the-art (SOTA) models on the speech dereverberation task.
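The bin-wise idea behind the complexity reduction can be illustrated with a minimal sketch (an assumption for illustration, not the authors' implementation): each frequency bin of the time-frequency representation is treated as an independent sequence along time, and all bins share one small set of LSTM weights, so the recurrence costs no more than a single small LSTM vectorized over the frequency axis.

```python
# Illustrative sketch only: a frequency bin-wise LSTM where all F bins
# share the same weights and are processed as a batch along the time axis.
# Shapes and names (binwise_lstm, Wx, Wh, b) are assumptions, not from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binwise_lstm(x, Wx, Wh, b):
    """x: (T, F, C_in) time-frequency features; returns (T, F, H) hidden states.

    Every frequency bin runs the same LSTM cell independently along time,
    vectorized over the F dimension, so the parameter count is that of a
    single small LSTM regardless of the number of bins.
    """
    T, F, _ = x.shape
    H = Wh.shape[0]
    h = np.zeros((F, H))
    c = np.zeros((F, H))
    out = np.empty((T, F, H))
    for t in range(T):
        gates = x[t] @ Wx + h @ Wh + b            # (F, 4H), one step for all bins
        i, f, g, o = np.split(gates, 4, axis=-1)  # input/forget/cell/output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        out[t] = h
    return out

rng = np.random.default_rng(0)
T, F, C, H = 8, 16, 4, 6                 # frames, bins, input channels, hidden size
x = rng.standard_normal((T, F, C))
Wx = rng.standard_normal((C, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)
y = binwise_lstm(x, Wx, Wh, b)
print(y.shape)  # (8, 16, 6)
```

In DRC-NET this kind of shared recurrence is paired with convolutions inside each basic operator; the sketch only shows why running the RNN over every bin stays cheap.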
