Joint Phoneme Alignment And Text-Informed Speech Separation On Highly Corrupted Speech

Kilian Schulze-Forster, Clement S. J. Doire, Gaël Richard, Roland Badeau

Length: 12:04
04 May 2020

Speech separation quality can be improved by exploiting textual information. However, this usually requires text-to-speech alignment at the phoneme level. Classical alignment methods are designed for relatively clean speech and degrade noticeably on corrupted speech. We propose to perform text-informed speech-music separation and phoneme alignment jointly using recurrent neural networks and the attention mechanism, and we show that this benefits both tasks. In experiments, phoneme transcripts are used to improve the perceived quality of separated speech over a non-informed baseline. Moreover, our novel phoneme alignment method based on the attention mechanism achieves state-of-the-art alignment accuracy on clean and on heavily corrupted speech.
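
To make the idea concrete, below is a minimal sketch (not the authors' code) of how an attention mechanism over phoneme embeddings could condition a recurrent separation model while simultaneously producing a soft phoneme-to-frame alignment. All layer sizes, names, and the exact network layout are illustrative assumptions, not the architecture from the paper.

```python
# Hedged sketch: attention-based text conditioning for speech separation.
# The attention weights double as a soft phoneme-to-frame alignment.
import torch
import torch.nn as nn

class TextInformedSeparator(nn.Module):
    def __init__(self, n_freq=513, n_phonemes=64, emb_dim=128, hidden=256):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, emb_dim)
        self.encoder = nn.LSTM(n_freq, hidden, batch_first=True, bidirectional=True)
        self.query_proj = nn.Linear(2 * hidden, emb_dim)
        self.decoder = nn.LSTM(2 * hidden + emb_dim, hidden, batch_first=True)
        self.mask_out = nn.Linear(hidden, n_freq)

    def forward(self, mix_spec, phonemes):
        # mix_spec: (batch, frames, n_freq) magnitude spectrogram of speech + music
        # phonemes: (batch, phoneme_len) integer phoneme IDs from the transcript
        text = self.phoneme_emb(phonemes)                  # (B, P, emb)
        enc, _ = self.encoder(mix_spec)                    # (B, T, 2H)
        queries = self.query_proj(enc)                     # (B, T, emb)
        # Attention scores between every spectrogram frame and every phoneme.
        scores = torch.bmm(queries, text.transpose(1, 2))  # (B, T, P)
        align = torch.softmax(scores, dim=-1)              # soft alignment
        context = torch.bmm(align, text)                   # (B, T, emb)
        dec, _ = self.decoder(torch.cat([enc, context], dim=-1))
        mask = torch.sigmoid(self.mask_out(dec))           # (B, T, n_freq)
        speech_est = mask * mix_spec                       # masked speech estimate
        return speech_est, align

# Usage: taking the argmax of the attention weights per frame yields a
# hard phoneme alignment alongside the separated speech.
model = TextInformedSeparator()
mix = torch.rand(2, 100, 513)
phon = torch.randint(0, 64, (2, 30))
speech, align = model(mix, phon)
hard_alignment = align.argmax(dim=-1)  # phoneme index per frame
```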
