On the Effectiveness of Two-Step Learning for Latent-Variable Models
Cem Subakan, Maxime Gasse, Laurent Charlin
Latent-variable generative models offer a principled solution for modeling and sampling from complex probability distributions. Implementing a joint training objective with a complex prior, however, can be tedious, as one typically must derive and code a specific cost function for each new type of prior distribution. In this work, we propose a general framework for learning latent-variable generative models in two steps: in the first step, we train an autoencoder; in the second, we fit a prior model on the resulting latent distribution. This two-step approach offers a convenient alternative to joint training, as it allows existing models to be combined in a straightforward way, without the hassle of deriving and coding new joint training objectives. We demonstrate that two-step learning achieves performance similar to joint training, and in some cases yields even more accurate models.
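
To make the two-step recipe concrete, below is a minimal sketch, not the authors' exact implementation: it assumes a deterministic PyTorch autoencoder trained with mean-squared reconstruction error (step 1) and a scikit-learn Gaussian mixture fit on the encoded training data as the latent prior (step 2). The architecture, dimensions, and the choice of a GMM prior are illustrative assumptions; the paper's framework admits any autoencoder and any density model over the latents.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture


class Autoencoder(nn.Module):
    """A plain deterministic autoencoder (illustrative architecture)."""

    def __init__(self, data_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def two_step_fit(loader, data_dim=784, latent_dim=16, epochs=10):
    # Step 1: train the autoencoder with a reconstruction loss only;
    # no prior term appears in this objective.
    model = Autoencoder(data_dim, latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.view(x.size(0), -1)
            opt.zero_grad()
            loss_fn(model(x), x).backward()
            opt.step()

    # Step 2: encode the training set and fit a prior on the latent codes
    # (here a 10-component Gaussian mixture, an illustrative assumption).
    with torch.no_grad():
        codes = torch.cat(
            [model.encoder(x.view(x.size(0), -1)) for x, _ in loader])
    prior = GaussianMixture(n_components=10).fit(codes.numpy())
    return model, prior


def sample(model, prior, n=64):
    # Sampling: draw latents from the fitted prior, then decode.
    z, _ = prior.sample(n)
    with torch.no_grad():
        return model.decoder(torch.as_tensor(z, dtype=torch.float32))
```

The sketch highlights the practical appeal claimed in the abstract: swapping in a different prior (say, a normalizing flow in place of the GMM) only changes the density model fit in step 2, with no new joint cost function to derive or code.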