Compressing Deep Neural Networks For Efficient Speech Enhancement
Ke Tan, DeLiang Wang
The use of deep neural networks (DNNs) has dramatically improved the performance of speech enhancement over the past decade. However, achieving strong enhancement performance typically requires a large DNN, which is both computationally and memory intensive. Hence, it is difficult to deploy such DNNs on devices with limited hardware resources or in applications with strict latency requirements. To address this problem, we propose a model compression pipeline that reduces DNN size for speech enhancement, based on three techniques: sparse regularization, iterative pruning, and clustering-based quantization. Evaluation results show that our approach substantially reduces the sizes of different DNNs without significantly degrading their enhancement performance. Moreover, we find that training and then compressing a large DNN yields higher short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ) scores than directly training a small DNN of a size comparable to the compressed one. This further demonstrates the benefits of the proposed model compression approach.
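As a rough illustration of how the three stages could fit together, the PyTorch sketch below applies an L1 sparsity penalty during training, magnitude-based iterative pruning, and k-means weight clustering. The network architecture and the hyperparameters (`lam`, `fraction`, `n_clusters`) are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

# Hypothetical mask-estimation network for 161-bin magnitude spectra;
# the architecture is assumed for illustration, not taken from the paper.
model = nn.Sequential(
    nn.Linear(161, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 161), nn.Sigmoid(),
)

# 1) Sparse regularization: an L1 penalty on the weight matrices is added
#    to the training loss, pushing many weights toward zero.
def loss_with_l1(model, pred, target, lam=1e-5):
    l1 = sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)
    return F.mse_loss(pred, target) + lam * l1

# 2) Iterative pruning: zero out the smallest-magnitude fraction of each
#    weight matrix; in practice each pruning step is followed by fine-tuning.
def prune_step(model, fraction=0.2):
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices, leave biases intact
                k = max(1, int(fraction * p.numel()))
                threshold = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > threshold).float())

# 3) Clustering-based quantization: k-means clusters each layer's weights,
#    so only the centroid codebook and small integer indices need storing.
def quantize(model, n_clusters=16):
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                w = p.detach().cpu().numpy().reshape(-1, 1)
                km = KMeans(n_clusters=n_clusters, n_init=4).fit(w)
                quantized = km.cluster_centers_[km.labels_]  # shape (N, 1)
                p.copy_(torch.from_numpy(quantized).view_as(p).float())
```

In a full pipeline of this kind, pruning and fine-tuning would alternate over several rounds, and clustering would typically be applied only to the surviving nonzero weights so that pruned entries remain exactly zero; both refinements are omitted here for brevity.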