
PATNet: A Phoneme-Level Autoregressive Transformer Network for Speech Synthesis

Shiming Wang, Zhenhua Ling, Ruibo Fu, Jiangyan Yi, Jianhua Tao

Length: 00:09:08
08 Jun 2021

Aiming at efficiently predicting acoustic features with high naturalness and robustness, this paper proposes PATNet, a neural acoustic model for speech synthesis using phoneme-level autoregression. PATNet accepts phoneme sequences as input and is built on the Transformer structure. PATNet adopts a duration model instead of an attention mechanism for sequence alignment. The decoder of PATNet predicts the multi-frame spectra within one phoneme in parallel, given the predicted spectra of previous phonemes. Such phoneme-level autoregression enables PATNet to achieve higher inference efficiency than models with frame-level autoregression, such as Transformer-TTS, and improves the robustness of acoustic feature prediction by utilizing phoneme boundaries explicitly. Experimental results show that the speech synthesized by PATNet obtained a lower character error rate (CER) than Tacotron, Transformer-TTS and FastSpeech when evaluated by a speech recognition engine. Besides, PATNet achieved 10 times faster inference than Transformer-TTS and significantly better naturalness than FastSpeech.
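As a rough illustration of the phoneme-level autoregression described in the abstract, the sketch below shows a decoding loop that iterates over phonemes rather than frames: all frames of the current phoneme are produced in one parallel step, conditioned on the frames already generated for previous phonemes, with per-phoneme frame counts supplied by a duration model. This is only a toy sketch, not the paper's implementation; the mel dimension, the `predict_phoneme_frames` stand-in, and the random "decoder" are assumptions chosen to keep the example self-contained and runnable.

```python
import numpy as np

# Hypothetical setting (not from the paper): 80-dim mel spectra and a toy
# stand-in for PATNet's Transformer decoder.
MEL_DIM = 80


def predict_phoneme_frames(phoneme_encoding, past_frames, num_frames):
    """Stand-in decoder: predicts all frames of one phoneme in parallel,
    conditioned on the spectra of previously decoded phonemes.
    Returns random values here just to keep the sketch executable."""
    rng = np.random.default_rng(0)
    context = past_frames.mean(axis=0) if len(past_frames) else np.zeros(MEL_DIM)
    return rng.normal(loc=context + phoneme_encoding.mean(),
                      size=(num_frames, MEL_DIM))


def phoneme_level_autoregressive_decode(phoneme_encodings, durations):
    """Phoneme-level autoregression: the loop runs once per phoneme (not per
    frame); within each phoneme all frames are produced in a single step."""
    generated = np.zeros((0, MEL_DIM))
    for enc, dur in zip(phoneme_encodings, durations):
        frames = predict_phoneme_frames(enc, generated, dur)  # parallel within a phoneme
        generated = np.vstack([generated, frames])            # autoregressive across phonemes
    return generated


if __name__ == "__main__":
    # Toy input: 5 phonemes, each with an encoder output and a duration
    # (in frames) that would come from the duration model.
    encodings = [np.random.randn(256) for _ in range(5)]
    durations = [7, 4, 9, 5, 6]
    mel = phoneme_level_autoregressive_decode(encodings, durations)
    print(mel.shape)  # (31, 80): one row per predicted frame
```

The point of the sketch is the loop granularity: a frame-level autoregressive model such as Transformer-TTS would take one decoding step per frame, whereas here the number of sequential steps equals the number of phonemes, which is where the claimed inference speed-up comes from.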

Chairs:
Yu Zhang
