Speech Enhancement Autoencoder With Hierarchical Latent Structure

Koen Oostermeijer, Jun Du, Qing Wang, Chin-Hui Lee

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:37
10 Jun 2021

A new hierarchical convolutional neural network-based autoencoder architecture called SEHAE (Speech Enhancement Hierarchical AutoEncoder) is introduced, in which the latent representation is decomposed into several parts that correspond to different scales. The model consists of three functionally different components. First, a stack of encoders generates a set of latent vectors that contain information from an increasingly larger receptive field. Second, the decoders construct the clean speech in a stage-wise and additive fashion, starting from a learned initial vector. The third component, which we call funnel networks, is tasked with “knitting” together the outputs of the previous decoder and the encoder to compute latent vectors for the next decoder. Several options for initial vectors are explored. Experiments show that SEHAE achieves significant improvements for the considered speech quality and intelligibility measures, outperforming a denoising autoencoder and other step-wise models. Furthermore, its internal workings are investigated using the intermediate results from the decoders.
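The data flow described above can be sketched in miniature: a stack of encoders produces latents from an increasingly larger receptive field, funnel networks combine the previous decoder's output with the matching encoder latent, and each decoder adds a refinement to the running clean-speech estimate, starting from an initial vector. This is a hypothetical toy illustration with random dense layers and made-up dimensions, not the authors' convolutional implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16       # feature/latent dimension (illustrative, not from the paper)
STAGES = 3   # number of encoder/decoder stages (illustrative)

def layer(d_in, d_out):
    # Random weights stand in for trained network layers in this sketch.
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: np.tanh(x @ W)

# Stack of encoders: each consumes the previous encoder's output, so
# deeper latents summarise an increasingly larger receptive field.
encoders = [layer(D, D) for _ in range(STAGES)]

# Funnel networks "knit" the previous decoder output together with the
# matching encoder latent to form the next decoder's latent input.
funnels = [layer(2 * D, D) for _ in range(STAGES)]

# Each decoder contributes an additive refinement to the estimate.
decoders = [layer(D, D) for _ in range(STAGES)]

def sehae_forward(noisy, z0):
    """Stage-wise, additive reconstruction (toy version of the idea)."""
    # Encoding pass: collect latents from shallow to deep.
    latents, h = [], noisy
    for enc in encoders:
        h = enc(h)
        latents.append(h)

    # Decoding pass: start from the initial vector z0 and consume the
    # latents deepest-first, adding one correction per stage.
    estimate, prev = np.zeros(D), z0
    for funnel, dec, z in zip(funnels, decoders, reversed(latents)):
        prev = funnel(np.concatenate([prev, z]))
        estimate = estimate + dec(prev)   # additive, stage-wise update
    return estimate

noisy = rng.standard_normal(D)
z0 = np.zeros(D)   # one possible initial-vector choice in this sketch
enhanced = sehae_forward(noisy, z0)
print(enhanced.shape)
```

In the paper the initial vector is learned and several options for it are compared; here a zero vector merely marks where that choice plugs in, and the intermediate `estimate` after each stage corresponds to the decoder outputs the authors inspect.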

Chairs:
Timo Gerkmann
