Dynamic Variational Autoencoders For Visual Process Modeling
Alexander Sagel, Hao Shen
SPS
This work studies the problem of modeling visual processes by leveraging deep generative architectures to learn linear, Gaussian representations from observed sequences. We propose a joint learning framework combining a vector autoregressive model and a Variational Autoencoder. The resulting architecture allows a Variational Autoencoder to simultaneously learn a non-linear observation model and a linear state model from sequences of frames. We validate our approach on the synthesis of artificial sequences and dynamic textures. To this end, we use our architecture to learn a statistical model of each visual process and generate a new sequence from each learned model.
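The combination described above can be illustrated with a minimal NumPy sketch. It is not the paper's implementation: the linear encoder/decoder weights, dimensions, and loss weighting below are all toy assumptions. The sketch shows the joint objective structure (VAE reconstruction term, KL term, and a latent vector-autoregressive prediction term) and how a learned model would generate a new sequence by rolling out the linear state model and decoding each state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration.
frame_dim, latent_dim, T = 64, 8, 20

# Hypothetical linear stand-ins for the VAE encoder/decoder networks.
W_enc_mu = rng.normal(scale=0.1, size=(latent_dim, frame_dim))
W_enc_logvar = rng.normal(scale=0.1, size=(latent_dim, frame_dim))
W_dec = rng.normal(scale=0.1, size=(frame_dim, latent_dim))

# Linear state (VAR) transition matrix A: z_{t+1} ~ A z_t + noise.
A = rng.normal(scale=0.1, size=(latent_dim, latent_dim))

def joint_objective(frames):
    """Forward pass of a combined loss: reconstruction + KL + VAR consistency."""
    mu = frames @ W_enc_mu.T                      # encoder mean, shape (T, latent_dim)
    logvar = frames @ W_enc_logvar.T              # encoder log-variance
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
    recon = z @ W_dec.T                           # decoded frames
    recon_err = np.mean((recon - frames) ** 2)    # observation-model fit
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))  # KL to N(0, I)
    var_err = np.mean((z[1:] - z[:-1] @ A.T) ** 2)  # linear state-model fit
    return recon_err + kl + var_err

def generate(z0, steps):
    """Roll out the linear state model from z0 and decode each latent state."""
    states = [z0]
    for _ in range(steps - 1):
        states.append(A @ states[-1] + 0.01 * rng.normal(size=latent_dim))
    return np.stack(states) @ W_dec.T             # shape (steps, frame_dim)

frames = rng.normal(size=(T, frame_dim))
loss = joint_objective(frames)
synth = generate(rng.normal(size=latent_dim), steps=T)
```

In the actual method the encoder and decoder are deep networks trained jointly with `A`; the sketch only makes explicit that the state dynamics stay linear while the observation map is free to be non-linear.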