Can Knowledge of End-to-End Text-to-Speech Models Improve Neural MIDI-to-Audio Synthesis Systems?
Xuan Shi (University of Southern California); Erica Cooper; Xin Wang (National Institute of Informatics); Junichi Yamagishi (National Institute of Informatics); Shrikanth Narayanan (University of Southern California)
Given the similarity between music and speech synthesis from symbolic input and the rapid development of text-to-speech (TTS) techniques, it is worthwhile to explore ways to improve MIDI-to-audio performance by borrowing from TTS methods. In this study, we analyze the shortcomings of a TTS-based MIDI-to-audio system and improve it in terms of feature computation, model selection, and training strategy, aiming to synthesize highly natural-sounding audio. Moreover, we conduct an extensive model evaluation through listening tests, pitch measurement, and spectrogram analysis. This work not only demonstrates the synthesis of highly natural music but also offers a thorough analytical approach and useful outcomes for the community. Our code and pre-trained models are open-sourced at https://github.com/nii-yamagishilab/midi-to-audio.