High-fidelity Portrait Editing via Exploring Differentiable Guided Sketches from the Latent Space
Chengrong Wang, Chenjie Cao, Yanwei Fu, Xiangyang Xue
This paper studies the task of sketch-guided high-fidelity portrait editing. Advanced unconditional generators such as StyleGAN can produce high-quality portrait images with great diversity. In previous research, StyleGAN has been successfully utilized for color-guided image editing through latent vector optimization. Nonetheless, passing sketch information directly to the generative model is non-trivial. To this end, we present an algorithm that precisely controls the generation process via differentiable guided sketches derived from the latent space. Specifically, we re-purpose the classic eXtended difference-of-Gaussians (XDoG) operator to derive differentiable sketches from images. We further propose a multi-scale sketch loss that guides the model to follow the guidance sketch during generation. Extensive experiments validate the efficacy of our model in sketch-guided editing, and the quality of the produced images surpasses that of competing methods.
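To make the two ingredients named above more concrete, the snippet below gives a minimal sketch, in PyTorch, of a differentiable XDoG operator (standard formulation with a smooth tanh threshold) and of a multi-scale sketch loss built on top of it. The function names, the grayscale conversion, and the parameter defaults (sigma, k, tau, eps, phi, the scale set) are illustrative assumptions, not the paper's implementation or settings.

```python
# Hedged illustration: differentiable XDoG + a multi-scale sketch loss.
# Not the authors' code; parameter values are placeholders.
import torch
import torch.nn.functional as F

def _gaussian_kernel(sigma, channels):
    """Build a 2-D Gaussian kernel shaped as a depthwise conv weight."""
    radius = max(1, int(3 * sigma))
    coords = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = torch.outer(g, g)                      # separable -> 2-D
    return kernel.expand(channels, 1, -1, -1), radius

def _gaussian_blur(x, sigma):
    """Differentiable Gaussian blur via depthwise convolution."""
    c = x.shape[1]
    kernel, radius = _gaussian_kernel(sigma, c)
    kernel = kernel.to(device=x.device, dtype=x.dtype)
    return F.conv2d(x, kernel, padding=radius, groups=c)

def xdog(gray, sigma=1.0, k=1.6, tau=0.98, eps=0.1, phi=10.0):
    """eXtended difference-of-Gaussians with a soft tanh threshold, so
    gradients can flow from the sketch back to the image and, in turn,
    to the latent code that produced it."""
    d = _gaussian_blur(gray, sigma) - tau * _gaussian_blur(gray, k * sigma)
    # Soft threshold: 1 for strong responses, a tanh ramp elsewhere.
    return torch.where(d >= eps,
                       torch.ones_like(d),
                       1.0 + torch.tanh(phi * (d - eps)))

def multiscale_sketch_loss(image, guide_sketch, scales=(1.0, 0.5, 0.25)):
    """Hypothetical multi-scale sketch loss: compare XDoG sketches of the
    generated image with the guidance sketch at several resolutions."""
    gray = image.mean(dim=1, keepdim=True)          # crude RGB -> luminance
    loss = 0.0
    for s in scales:
        size = [max(8, int(round(d * s))) for d in gray.shape[-2:]]
        g = F.interpolate(gray, size=size, mode='bilinear', align_corners=False)
        t = F.interpolate(guide_sketch, size=size, mode='bilinear', align_corners=False)
        loss = loss + F.l1_loss(xdog(g), t)
    return loss
```

In a latent-optimization setting such as the one described above, a loss of this form would be evaluated on the generator's output at the current latent code and back-propagated to update that code, typically alongside reconstruction or perceptual terms.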