Source-Aware Neural Speech Coding For Noisy Speech Compression
Haici Yang, Kai Zhen, Seungkwon Beack, Minje Kim
This paper introduces a novel neural network-based speech coding system that handles noisy speech effectively. The proposed source-aware neural audio coding (SANAC) system harmonizes a deep autoencoder-based source separation model with a neural coding system, so that it can explicitly perform source separation and coding in the latent space. An added benefit of this system is that the codec can allocate different amounts of bits to the underlying sources, so that the more important source sounds better in the decoded signal. We target the use case where the user on the receiver side cares about the quality of the non-speech components in speech communication, while the speech source still carries the most important information. Both objective and subjective evaluation tests show that SANAC recovers the original noisy speech with better quality than the baseline neural audio coding system, which has no source-aware coding mechanism.
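The sketch below illustrates the general idea of source-aware coding described in the abstract: a shared encoder produces per-source latent codes, each source is quantized with its own codebook size (i.e., a different bit budget, larger for speech), and the decoded sources are summed to reconstruct the noisy mixture. This is a minimal PyTorch sketch, not the paper's implementation; the class, layer sizes, and the nearest-neighbor vector quantizer are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class SourceAwareCodecSketch(nn.Module):
    """Hypothetical sketch: shared encoder, per-source quantization with
    different codebook sizes, shared decoder summing the decoded sources."""

    def __init__(self, latent_dim=64, speech_codes=1024, noise_codes=64):
        super().__init__()
        # Shared analysis encoder (assumed 1-D conv stack).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 2 * latent_dim, 9, padding=4),
        )
        # Separate codebooks: more entries (more bits) for the speech source.
        self.speech_codebook = nn.Parameter(torch.randn(speech_codes, latent_dim))
        self.noise_codebook = nn.Parameter(torch.randn(noise_codes, latent_dim))
        # Shared synthesis decoder.
        self.decoder = nn.Sequential(
            nn.Conv1d(latent_dim, 32, 9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, 9, padding=4),
        )

    def quantize(self, z, codebook):
        # Generic nearest-neighbor vector quantization with a
        # straight-through gradient (stand-in for the paper's quantizer).
        zt = z.transpose(1, 2)                                   # (B, T, D)
        dist = torch.cdist(zt, codebook.unsqueeze(0).expand(zt.size(0), -1, -1))
        idx = dist.argmin(-1)                                    # (B, T) code indices
        zq = codebook[idx].transpose(1, 2)                       # (B, D, T)
        return z + (zq - z).detach(), idx

    def forward(self, x):
        # x: (batch, 1, samples) noisy speech.
        z = self.encoder(x)
        z_speech, z_noise = z.chunk(2, dim=1)                    # split latent per source
        zq_s, _ = self.quantize(z_speech, self.speech_codebook)  # higher bitrate
        zq_n, _ = self.quantize(z_noise, self.noise_codebook)    # lower bitrate
        # Decode each source and sum to recover the noisy mixture.
        return self.decoder(zq_s) + self.decoder(zq_n)


if __name__ == "__main__":
    codec = SourceAwareCodecSketch()
    noisy = torch.randn(1, 1, 512)
    print(codec(noisy).shape)  # torch.Size([1, 1, 512])
```

Under this assumed setup, the bit allocation per source is controlled simply by the relative codebook sizes; the actual SANAC system trains separation and coding jointly in the latent space rather than using this generic quantizer.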
Chairs:
Zeyu Jin