Style-Conditioned Music Generation
Yu-Quan Lim, Chee Seng Chan, Fung Ying Loo
Recent works have shown success in generating music using a Variational Autoencoder (VAE). However, we found that the style of the generated music is usually governed or limited by the training dataset. In this work, we propose a new formulation of the VAE that allows users to condition on the style of the generated music. Technically, our VAE consists of two latent spaces, a content space and a style space, which encode the content and style of a song separately. Each style is represented by a continuous style embedding, unlike previous works which mostly used discrete or one-hot style labels. We trained our model on public datasets made up of Bach chorales and western folk tunes. Empirically, as well as from a music theory point of view, we show that our proposed model generates better music samples of each style than a baseline model. The source code and generated samples are available at https://github.com/daQuincy/DeepMusicvStyle
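The sketch below illustrates the general idea described in the abstract: a VAE whose decoder is conditioned on a sampled content latent together with a learned, continuous style embedding. It is a minimal, hypothetical PyTorch example; the class and parameter names (TwoSpaceVAE, content_dim, style_dim, num_styles) are illustrative assumptions and do not reflect the authors' actual implementation in the linked repository.

```python
# Minimal sketch (not the authors' code): a VAE with separate content and
# style latent spaces, where each style is a learned continuous embedding.
import torch
import torch.nn as nn

class TwoSpaceVAE(nn.Module):
    def __init__(self, input_dim=128, content_dim=32, style_dim=16, num_styles=2):
        super().__init__()
        # Encoder producing a Gaussian content latent (standard VAE head).
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.content_mu = nn.Linear(256, content_dim)
        self.content_logvar = nn.Linear(256, content_dim)
        # Continuous style embeddings: one learned vector per style,
        # instead of a discrete one-hot style label.
        self.style_embedding = nn.Embedding(num_styles, style_dim)
        # Decoder conditioned on both the content sample and the style embedding.
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x, style_id):
        h = self.encoder(x)
        mu, logvar = self.content_mu(h), self.content_logvar(h)
        # Reparameterisation trick for the content latent.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        s = self.style_embedding(style_id)                 # (batch, style_dim)
        x_hat = self.decoder(torch.cat([z, s], dim=-1))    # style-conditioned decode
        return x_hat, mu, logvar

# Usage: reconstruct a batch while conditioning on style 0 (e.g. "Bach chorale").
model = TwoSpaceVAE()
x = torch.randn(8, 128)
style = torch.zeros(8, dtype=torch.long)
x_hat, mu, logvar = model(x, style)
```

Keeping the style embedding continuous (an `nn.Embedding` vector rather than a one-hot code) is what allows the decoder to be conditioned on styles in a smooth space, as the abstract contrasts with prior discrete-label approaches.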