Parallel Waveform Synthesis Based On Generative Adversarial Networks With Voicing-Aware Conditional Discriminators
Ryuichi Yamamoto, Eunwoo Song, Min-Jae Hwang, Jae-Min Kim
This paper proposes voicing-aware conditional discriminators for Parallel WaveGAN-based waveform synthesis systems. In this framework, we adopt a projection-based conditioning method that can significantly improve the discriminator’s performance. Furthermore, the conventional discriminator is separated into two waveform discriminators for modeling voiced and unvoiced speech. As each discriminator learns the distinctive characteristics of the harmonic and noise components, respectively, the adversarial training process becomes more efficient, allowing the generator to produce more realistic speech waveforms. Subjective test results demonstrate the superiority of the proposed method over the conventional Parallel WaveGAN and WaveNet systems. In particular, our speaker-independently trained model within a FastSpeech 2-based text-to-speech framework achieves mean opinion scores of 4.20, 4.18, 4.21, and 4.31 for four Japanese speakers, respectively.
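To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it shows (1) projection-based conditioning, where the discriminator score is augmented with an inner product between an embedding of the conditioning features and an intermediate feature map, and (2) two separate waveform discriminators selected by a voiced/unvoiced mask. All layer sizes, class names (ProjectionDiscriminator, VoicingAwareDiscriminator), and the sample-level V/UV masking scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ProjectionDiscriminator(nn.Module):
    """1-D convolutional waveform discriminator with projection conditioning."""

    def __init__(self, cond_channels: int = 80, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=15, padding=7),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7),
            nn.LeakyReLU(0.2),
        )
        # Unconditional score head.
        self.score = nn.Conv1d(hidden, 1, kernel_size=1)
        # Projects conditioning features (e.g., acoustic features upsampled
        # to the waveform sampling rate) into the hidden feature space.
        self.cond_proj = nn.Conv1d(cond_channels, hidden, kernel_size=1)

    def forward(self, wav: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # wav: (B, 1, T); cond: (B, cond_channels, T) at the sample rate.
        h = self.backbone(wav)                       # (B, hidden, T)
        out = self.score(h)                          # (B, 1, T)
        # Projection term: per-sample inner product <h, proj(cond)>.
        out = out + (h * self.cond_proj(cond)).sum(dim=1, keepdim=True)
        return out


class VoicingAwareDiscriminator(nn.Module):
    """Routes voiced and unvoiced regions to two separate discriminators."""

    def __init__(self, cond_channels: int = 80):
        super().__init__()
        self.d_voiced = ProjectionDiscriminator(cond_channels)
        self.d_unvoiced = ProjectionDiscriminator(cond_channels)

    def forward(self, wav, cond, vuv_mask):
        # vuv_mask: (B, 1, T), 1.0 for voiced samples and 0.0 for unvoiced.
        score_v = self.d_voiced(wav, cond) * vuv_mask
        score_uv = self.d_unvoiced(wav, cond) * (1.0 - vuv_mask)
        return score_v, score_uv


if __name__ == "__main__":
    B, T = 2, 16000
    wav = torch.randn(B, 1, T)
    cond = torch.randn(B, 80, T)                 # conditioning at sample rate
    vuv = (torch.rand(B, 1, T) > 0.5).float()    # toy V/UV mask
    d = VoicingAwareDiscriminator()
    sv, suv = d(wav, cond, vuv)
    print(sv.shape, suv.shape)                   # (2, 1, 16000) each
```

In this sketch, each discriminator only receives gradient from the regions its mask keeps, so the voiced branch can specialize in harmonic structure while the unvoiced branch focuses on noise-like segments; the conditioning and masking details in the actual paper may differ.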
Chairs: Jiangyan Yi