
Ripple Sparse Self-Attention For Monaural Speech Enhancement

Qiquan Zhang (The University of New South Wales); Hongxu Zhu (Department of Electrical and Computer Engineering, National University of Singapore); Qi Song (Alibaba); Xinyuan Qian (Department of Electrical and Computer Engineering, National University of Singapore); Zhaoheng Ni (Meta AI); Haizhou Li (The Chinese University of Hong Kong, Shenzhen)

06 Jun 2023

The use of the Transformer represents a recent success in speech enhancement. However, its core component, self-attention, suffers from quadratic complexity, which is computationally prohibitive for long speech recordings. Moreover, it allows each time frame to attend to all time frames, neglecting the strong local correlations of speech signals. This study presents a simple yet effective sparse self-attention for speech enhancement, called ripple attention, which simultaneously performs fine- and coarse-grained modeling of local and global dependencies, respectively. Specifically, we employ local band attention to enable each frame to attend to its closest neighbor frames within a window at fine granularity, while employing dilated attention outside the window to model global dependencies at coarse granularity. We evaluate the efficacy of our ripple attention for speech enhancement on two commonly used training objectives. Extensive experimental results consistently confirm the substantially superior performance of the ripple attention design over standard full self-attention, blockwise attention, and dual-path attention (SepFormer) in terms of speech quality and intelligibility.
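To make the sparsity pattern concrete, the sketch below builds an attention mask that combines a dense local band with dilated positions outside the window, in the spirit of the description above. This is not the authors' implementation: the window size `w`, dilation `d`, and the dense-masking formulation are illustrative assumptions, and a practical realization would exploit the sparsity for sub-quadratic cost rather than masking a full T x T score matrix.

```python
import torch

def ripple_mask(num_frames: int, w: int = 4, d: int = 4) -> torch.Tensor:
    """Boolean (T, T) mask; True means frame i may attend to frame j.
    Illustrative pattern: dense band |i-j| <= w, plus dilated positions
    outside the band at stride d. Not the paper's exact configuration."""
    idx = torch.arange(num_frames)
    dist = (idx[None, :] - idx[:, None]).abs()      # |i - j|
    local = dist <= w                               # fine-grained local band
    dilated = (dist > w) & ((dist - w) % d == 0)    # coarse, strided global links
    return local | dilated

def ripple_attention(q, k, v, w: int = 4, d: int = 4):
    """Scaled dot-product attention restricted to the ripple pattern.
    q, k, v: (batch, T, dim). Dense masking is used here for clarity only."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    mask = ripple_mask(q.shape[1], w, d).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Example: 8 time frames with 64-dimensional features
x = torch.randn(2, 8, 64)
out = ripple_attention(x, x, x, w=2, d=2)
print(out.shape)  # torch.Size([2, 8, 64])
```

Each query row always retains its local band (including the diagonal), so the softmax is well defined, while the dilated positions give every frame a coarse view of distant context without the quadratic cost of full attention.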
