Learning Non-Linear Disentangled Editing for StyleGAN
Xu Yao, Alasdair Newson, Yann Gousseau, Pierre Hellier
Recent work has demonstrated the great potential of image editing in the latent space of powerful deep generative models such as StyleGAN. However, the success of such methods relies on the assumption that, for a binary attribute, the latent space can be separated into two subspaces by a hyperplane. In this work, we show that this hypothesis is a significant limitation and propose to learn a non-linear, regularized, and identity-preserving latent space transformation that leads to more accurate and disentangled manipulations of facial attributes.
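To make the contrast concrete, the sketch below shows the two editing paradigms the abstract refers to: the linear baseline, which shifts a latent code along a fixed hyperplane normal, and a non-linear transformation of the latent code. The network shape, the attribute classifier, and the regularization term are illustrative assumptions for a minimal PyTorch sketch, not the authors' exact architecture or losses.

```python
# Minimal sketch: linear hyperplane editing vs. a learned non-linear edit.
# Names, sizes, and losses are assumptions, not the paper's exact method.
import torch
import torch.nn as nn

LATENT_DIM = 512  # typical StyleGAN W-space dimensionality


def linear_edit(w, direction, alpha):
    """Baseline: move the latent code along a fixed hyperplane normal."""
    return w + alpha * direction


class NonLinearEditor(nn.Module):
    """Hypothetical non-linear latent transformation T: W -> W."""

    def __init__(self, dim=LATENT_DIM, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, dim),
        )

    def forward(self, w):
        # Residual form keeps the edited code close to the input latent,
        # serving here as a simple identity-preservation prior.
        return w + self.net(w)


def training_step(editor, w, attr_classifier, target, opt, lambda_reg=1.0):
    """One illustrative step: push the edited latent toward the target
    attribute while regularizing its distance to the original code."""
    opt.zero_grad()
    w_edit = editor(w)
    attr_loss = nn.functional.binary_cross_entropy_with_logits(
        attr_classifier(w_edit), target)
    reg_loss = (w_edit - w).pow(2).mean()  # stay near the input latent
    loss = attr_loss + lambda_reg * reg_loss
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch, `attr_classifier` stands in for any differentiable predictor of the binary attribute on latent codes; the residual parameterization and the L2 regularizer are one plausible way to keep edits identity-preserving, chosen for illustration only.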