DIAN: Duration Informed Auto-Regressive Network for Voice Cloning
Wei Song, Xin Yuan, Zhengchen Zhang, Chao Zhang, Youzheng Wu, Xiaodong He, Bowen Zhou
In this paper, we propose a novel end-to-end speech synthesis approach, the Duration Informed Auto-regressive Network (DIAN), which consists of an acoustic model and a separate duration model. Unlike other auto-regressive TTS methods, the phoneme duration information is provided as part of the input to the acoustic model, which allows the attention mechanism between its encoder and decoder to be removed. This eliminates the commonly seen skipping and repeating issues and improves speech intelligibility while preserving high speech quality. A Transformer-based duration model predicts the phoneme durations for the attention-free acoustic model. We developed our TTS systems for the M2VoC challenge using the proposed DIAN approach. In our procedure, a multi-speaker attention-free acoustic model and its Transformer-based duration model are first trained separately on the training data released by M2VoC. Next, the multi-speaker models are adapted into speaker-specific models using the speaker-dependent data and transfer learning. Finally, a speaker-specific LPCNet vocoder is trained and used to synthesize the speech of the corresponding speaker. The M2VoC results show that the proposed approach ranked 3rd in speech quality and 4th in speaker similarity and style similarity in the Track 1-a task.
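The central mechanism described in the abstract, replacing encoder-decoder attention with explicit phoneme durations, amounts to expanding each phoneme encoding to its predicted number of acoustic frames before auto-regressive decoding. Below is a minimal illustrative sketch of that expansion step in PyTorch; the function name, tensor shapes, and hidden size are assumptions for illustration, not the authors' implementation.

```python
import torch

def expand_by_duration(encodings: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Repeat each phoneme encoding according to its duration (in frames).

    encodings: (num_phonemes, hidden_dim) encoder outputs for one utterance.
    durations: (num_phonemes,) integer frame counts, e.g. from the duration model.
    Returns a (total_frames, hidden_dim) sequence aligned to acoustic frames,
    so the decoder can run auto-regressively without an attention module.
    """
    return torch.repeat_interleave(encodings, durations, dim=0)

# Example: three phonemes lasting 2, 4, and 3 frames, respectively.
enc = torch.randn(3, 256)
dur = torch.tensor([2, 4, 3])
frames = expand_by_duration(enc, dur)
assert frames.shape == (9, 256)
```

Because the frame-level alignment is fixed by the durations, the decoder cannot skip or repeat phonemes, which is the intelligibility benefit the abstract claims over attention-based auto-regressive TTS.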