
Style-Conditioned Music Generation

Yu-Quan Lim, Chee Seng Chan, Fung Ying Loo

Length: 09:21
09 Jul 2020

Recent works have shown success in generating music using a Variational Autoencoder (VAE). However, we found that the style of the generated music is usually governed or limited by the training dataset. In this work, we propose a new VAE formulation that allows users to condition on the style of the generated music. Technically, our VAE consists of two latent spaces, a content space and a style space, which encode the content and the style of a song separately. Each style is represented by a continuous style embedding, unlike previous works which mostly used discrete or one-hot style labels. We trained our model on public datasets made up of Bach chorales and Western folk tunes. Empirically, as well as from a music-theory point of view, we show that our proposed model generates better music samples of each style than a baseline model. The source code and generated samples are available at https://github.com/daQuincy/DeepMusicvStyle
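
The abstract outlines the core idea: a VAE whose encoder produces two separate latent codes (content and style), with each style represented by a learned continuous embedding that conditions the decoder. The following is only a minimal sketch of that idea, not the authors' implementation; the layer sizes, loss terms, and the `TwoSpaceVAE` class name are illustrative assumptions, and the actual architecture is in the linked repository.

```python
# Minimal sketch (assumed, not the authors' code) of a VAE with separate
# content and style latent spaces, where each style is a learned continuous
# embedding used to condition generation. All dimensions are illustrative.
import torch
import torch.nn as nn

class TwoSpaceVAE(nn.Module):
    def __init__(self, input_dim=128, content_dim=32, style_dim=8, n_styles=2):
        super().__init__()
        # One continuous embedding per style (e.g. Bach chorale vs. folk tune).
        self.style_embedding = nn.Embedding(n_styles, style_dim)
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        # Separate posterior heads for the content and style latent spaces.
        self.content_mu = nn.Linear(256, content_dim)
        self.content_logvar = nn.Linear(256, content_dim)
        self.style_mu = nn.Linear(256, style_dim)
        self.style_logvar = nn.Linear(256, style_dim)
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x, style_id):
        h = self.encoder(x)
        z_content = self.reparameterize(self.content_mu(h), self.content_logvar(h))
        z_style = self.reparameterize(self.style_mu(h), self.style_logvar(h))
        x_hat = self.decoder(torch.cat([z_content, z_style], dim=-1))
        # The continuous style embedding serves as an anchor for the style
        # posterior; at generation time, sampling a style code near the chosen
        # embedding conditions the output on that style.
        style_anchor = self.style_embedding(style_id)
        return x_hat, z_content, z_style, style_anchor
```

In such a setup, style transfer or style-conditioned sampling amounts to pairing a content code with the embedding of the desired style before decoding; how the style posterior is tied to the embedding (and how the music is tokenized) follows the paper and repository.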
