Embedding a differentiable mel-cepstral synthesis filter to a neural speech synthesis system
Takenori Yoshimura (Nagoya Institute of Technology); Shinji Takaki (Nagoya Institute of Technology); Kazuhiro Nakamura (Techno-Speech, Inc.); Keiichiro Oura (Techno-Speech, Inc.); Yukiya Hono (Nagoya Institute of Technology); Kei Hashimoto (Nagoya Institute of Technology); Yoshihiko Nankaku (Nagoya Institute of Technology); Keiichi Tokuda (Nagoya Institute of Technology)
This paper integrates a classic mel-cepstral synthesis filter into a modern neural speech synthesis system toward end-to-end controllable speech synthesis. Because the mel-cepstral synthesis filter is explicitly embedded in the neural waveform model of the proposed system, both the voice characteristics and the pitch of synthesized speech can be controlled directly via a frequency warping parameter and the fundamental frequency, respectively. We implement the mel-cepstral synthesis filter as a differentiable, GPU-friendly module so that the acoustic and waveform models of the proposed system can be jointly optimized in an end-to-end manner. Experiments show that the proposed system improves speech quality over a baseline system while maintaining controllability. The core PyTorch modules used in the experiments are publicly available on GitHub.
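As a rough, self-contained illustration of how a mel-cepstral synthesis filter can be made differentiable and GPU-friendly, the PyTorch sketch below maps per-frame mel-cepstra to minimum-phase FIR impulse responses in the frequency domain, using the standard warped exponential transfer function H(z) = exp(sum_m c~(m) z~^{-m}), and convolves them with an excitation signal. This is a minimal sketch under assumed settings, not the authors' released code: the class name and all hyperparameters (cep_order, alpha, fft_len, ir_len, frame_period) are illustrative.

```python
import math

import torch
import torch.nn.functional as F


class MelCepstralSynthesisFilter(torch.nn.Module):
    """Differentiable stand-in for a mel-cepstral (MLSA-style) synthesis filter.

    Per-frame mel-cepstra are mapped to minimum-phase FIR impulse responses
    in the frequency domain, then convolved with the excitation signal.
    Every step is a plain tensor op, so gradients flow back to the acoustic
    model that predicts the mel-cepstra.
    """

    def __init__(self, cep_order=24, alpha=0.42, fft_len=512,
                 ir_len=256, frame_period=80):
        super().__init__()
        self.fft_len, self.ir_len, self.fp = fft_len, ir_len, frame_period
        # beta(w) = w + 2*atan(a*sin(w) / (1 - a*cos(w))) is the phase of the
        # all-pass z~^{-1} = (z^{-1} - a)/(1 - a z^{-1}), i.e. the warping.
        w = torch.arange(fft_len // 2 + 1) * (2.0 * math.pi / fft_len)
        beta = w + 2.0 * torch.atan2(alpha * torch.sin(w),
                                     1.0 - alpha * torch.cos(w))
        phase = torch.outer(beta, torch.arange(cep_order + 1.0))  # (K, M+1)
        self.register_buffer("cos_mat", torch.cos(phase))
        self.register_buffer("sin_mat", torch.sin(phase))

    def impulse_response(self, mc):
        # log H(w) = sum_m mc[m] exp(-j m beta(w)): warped exponential filter.
        log_h = torch.complex(mc @ self.cos_mat.t(), -(mc @ self.sin_mat.t()))
        h = torch.fft.irfft(torch.exp(log_h), n=self.fft_len)
        # The warped exponential filter is causal and minimum-phase, so the
        # truncated leading samples approximate its impulse response.
        return h[..., :self.ir_len]

    def forward(self, excitation, mc):
        # excitation: (B, T * frame_period) pulse/noise signal built from F0.
        # mc:         (B, T, cep_order + 1) per-frame mel-cepstra.
        B, T, _ = mc.shape
        h = self.impulse_response(mc)                # (B, T, ir_len)
        x = F.pad(excitation, (self.ir_len - 1, 0))  # causal left padding
        frames = x.unfold(-1, self.fp + self.ir_len - 1, self.fp)
        segs = frames.unfold(-1, self.ir_len, 1).flip(-1)  # (B, T, fp, ir_len)
        # y[n] = sum_k h[k] * x[n - k], with h switched per frame.
        return torch.einsum("btpk,btk->btp", segs, h).reshape(B, -1)


# Toy usage: 2 utterances, 100 frames, 5 ms frame shift at 16 kHz.
filt = MelCepstralSynthesisFilter()
mc = 0.01 * torch.randn(2, 100, 25)   # small cepstra => gains near unity
mc.requires_grad_(True)
e = torch.randn(2, 100 * 80)          # noise excitation (unvoiced-like)
y = filt(e, mc)                       # (2, 8000) synthesized waveform
y.pow(2).mean().backward()            # gradients flow back to mc
```

Because every step is a matrix multiply, an FFT, or an einsum, the module runs batched on a GPU and backpropagates to whatever network predicts the mel-cepstra; the unfold-based per-frame convolution is chosen for readability and would typically be replaced by FFT-based filtering when the impulse response is long.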