Papez: Resource-efficient Speech Separation with Auditory Working Memory
Hyunseok Oh (Seoul National University); Juheon Yi (Seoul National University); Youngki Lee (Seoul National University)
Transformer-based models have recently reached state-of-the-art single-channel speech separation accuracy; however, their extreme computational load makes them difficult to deploy on resource-constrained mobile or IoT devices. We thus present Papez, a lightweight and computation-efficient single-channel speech separation model. Papez is built on three key techniques. First, we replace the inter-chunk Transformer with a small-sized auditory working memory. Second, we adaptively prune input tokens that do not need further processing. Finally, we reduce the number of parameters with a recurrent Transformer. Our extensive evaluation shows that Papez achieves the best resource-accuracy tradeoff by a large margin.
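The abstract names the three techniques but gives no implementation detail. Below is a minimal, hypothetical PyTorch sketch, not the authors' code, of how the three ideas could fit together: a few learned working-memory tokens standing in for an inter-chunk Transformer, a simplified halting-score token-pruning step, and a single Transformer block applied recurrently so its parameters are shared across depth. All module names, shapes, and hyperparameters here are assumptions.

```python
# Hypothetical sketch only; names, shapes, and hyperparameters are
# assumptions, not the published Papez implementation.
import torch
import torch.nn as nn

class PapezStyleSketch(nn.Module):
    def __init__(self, dim=64, n_mem=4, n_iters=4, halt_threshold=0.9):
        super().__init__()
        # (1) Small auditory working memory: a few learned tokens prepended
        # to each chunk, standing in for an inter-chunk Transformer.
        self.memory = nn.Parameter(torch.randn(n_mem, dim) * 0.02)
        # (3) One Transformer block applied n_iters times, so its parameters
        # are shared across depth (recurrent Transformer).
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=4 * dim, batch_first=True
        )
        # (2) Per-token halting score used for adaptive pruning.
        self.halt = nn.Linear(dim, 1)
        self.n_mem, self.n_iters = n_mem, n_iters
        self.halt_threshold = halt_threshold

    def forward(self, x):  # x: (batch, frames, dim), one chunk of features
        b = x.size(0)
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        h = torch.cat([mem, x], dim=1)          # memory tokens + chunk tokens
        active = torch.ones(h.shape[:2], dtype=torch.bool, device=h.device)
        for _ in range(self.n_iters):           # recurrent reuse of one block
            h_new = self.block(h)
            # Simplified pruning: tokens whose halting score has crossed the
            # threshold keep their value instead of being processed further
            # (a real implementation would drop them from the computation).
            h = torch.where(active.unsqueeze(-1), h_new, h)
            score = torch.sigmoid(self.halt(h)).squeeze(-1)
            active = active & (score < self.halt_threshold)
        return h[:, self.n_mem:]                # return only the chunk tokens

# Usage: a batch of two chunks, 100 frames each, 64 feature channels.
out = PapezStyleSketch()(torch.randn(2, 100, 64))
print(out.shape)  # torch.Size([2, 100, 64])
```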