Transmask: A Compact And Fast Speech Separation Model Based On Transformer
Zining Zhang, Bingsheng He, Zhenjie Zhang
SPS
Speech separation is an important problem in speech processing: the goal is to separate and generate clean speech for each speaker from mixed audio containing speech from different speakers. Empowered by deep learning techniques for sequence-to-sequence modeling, recent neural speech separation models can generate highly clean speech. To make these models more practical by reducing model size and inference time while maintaining high separation quality, we propose a new transformer-based speech separation approach called TransMask. By fully exploiting the ability of self-attention to capture long-term dependencies, we demonstrate that TransMask is more than 60% smaller and more than 2 times faster at inference than state-of-the-art solutions. TransMask fully utilizes parallelism during inference and achieves nearly linear inference time for reasonable input audio lengths. It also outperforms existing solutions on output speech quality, achieving an SDR above 16 on the LibriMix benchmark.
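The abstract describes the general recipe behind mask-based separation models such as TransMask: a self-attention encoder reads the mixed audio's frame sequence and predicts one mask per speaker, which is then applied to the mixture to recover each source. The sketch below is an illustrative toy in NumPy, not the authors' actual architecture; the single-head attention, random weights, feature size, and `estimate_masks` helper are all assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # single-head scaled dot-product self-attention over time frames:
    # every frame attends to every other frame, so long-range
    # dependencies are captured in one parallel step
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

def estimate_masks(mixture, n_speakers, rng):
    # mixture: (frames, features) magnitude-like representation;
    # returns per-speaker masks of shape (frames, n_speakers, features).
    # Weights are random here purely to illustrate the data flow.
    T, F = mixture.shape
    Wq, Wk, Wv = (rng.standard_normal((F, F)) * 0.1 for _ in range(3))
    H = self_attention(mixture, Wq, Wk, Wv)
    Wout = rng.standard_normal((F, n_speakers * F)) * 0.1
    logits = (H @ Wout).reshape(T, n_speakers, F)
    # softmax across speakers: masks for each time-frequency bin sum to 1
    return softmax(logits, axis=1)

rng = np.random.default_rng(0)
mixture = np.abs(rng.standard_normal((50, 16)))     # toy 2-speaker mixture
masks = estimate_masks(mixture, n_speakers=2, rng=rng)
separated = masks * mixture[:, None, :]             # apply masks to mixture
print(masks.shape)                                  # (50, 2, 16)
```

Because attention processes all frames at once (no recurrence), inference over the whole sequence is a handful of matrix multiplications, which is the parallelism the abstract credits for the near-linear inference time.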
Chairs:
Takuya Yoshioka