LiteSing: Towards Fast, Lightweight and Expressive Singing Voice Synthesis
Xiaobin Zhuang, Tao Jiang, Szu-Yu Chou, Bin Wu, Peng Hu, Simon Lui
LiteSing, proposed in this paper, is a fast, lightweight and expressive high-quality singing voice synthesis (SVS) system. The model stacks several non-autoregressive WaveNet blocks in the encoder and decoder under a generative adversarial architecture, predicts expressive conditions from the musical score, and generates acoustic features from these full conditions. The full conditions consist of spectrogram energy, voiced/unvoiced (V/UV) decision and dynamic pitch curve, which are shown to be related to expressiveness. Pitch and timbre features are predicted separately, avoiding interdependence between the two. Instead of a neural network vocoder, a parametric WORLD vocoder is employed at the final stage to preserve pitch curve consistency. Experimental results show that LiteSing is 1.386 times faster in inference speed and uses 15 times fewer training parameters than a feed-forward Transformer baseline, while achieving almost the same MOS in sound quality. In an A/B test, LiteSing achieves a 67.3% preference rate over the baseline in expressiveness, which indicates its advantage over the compared models.
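The core building block described above is a non-autoregressive (non-causal) WaveNet layer conditioned on the predicted expressive features. The following is a minimal PyTorch sketch of such a block, not the authors' implementation; the channel sizes, the conditioning interface and the number of stacked blocks are illustrative assumptions.

    # Minimal sketch (assumed shapes and sizes) of one non-autoregressive WaveNet
    # block with conditioning, as used in LiteSing-style encoder/decoder stacks.
    import torch
    import torch.nn as nn

    class NonARWaveNetBlock(nn.Module):
        """Non-causal dilated conv with gated activation, conditioning input,
        and residual/skip outputs."""
        def __init__(self, channels=192, cond_channels=256, kernel_size=3, dilation=1):
            super().__init__()
            pad = (kernel_size - 1) // 2 * dilation   # pad both sides: non-causal
            self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                                  padding=pad, dilation=dilation)
            self.cond = nn.Conv1d(cond_channels, 2 * channels, 1)  # project conditions
            self.res = nn.Conv1d(channels, channels, 1)
            self.skip = nn.Conv1d(channels, channels, 1)

        def forward(self, x, c):
            # x: (B, channels, T) hidden sequence
            # c: (B, cond_channels, T) full conditions, e.g. energy, V/UV and
            #    pitch curve upsampled to the frame rate (assumed layout)
            h = self.conv(x) + self.cond(c)
            a, b = h.chunk(2, dim=1)
            z = torch.tanh(a) * torch.sigmoid(b)       # gated activation unit
            return x + self.res(z), self.skip(z)       # residual out, skip out

    # Stack several blocks with increasing dilation; all frames are processed
    # in parallel, which is what makes the model non-autoregressive and fast.
    blocks = nn.ModuleList([NonARWaveNetBlock(dilation=2 ** i) for i in range(4)])
    x = torch.randn(1, 192, 100)   # dummy hidden sequence
    c = torch.randn(1, 256, 100)   # dummy full-condition sequence
    skips = 0
    for blk in blocks:
        x, s = blk(x, c)
        skips = skips + s

Because every frame is computed in parallel rather than sample by sample, stacking such blocks keeps inference fast; the decoded acoustic features are then passed to the parametric WORLD vocoder for waveform synthesis, as described in the abstract.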
Chairs:
Erica Cooper