HSEGAN: Hair Synthesis and Editing Using Structure-Adaptive Normalization on Generative Adversarial Network
Wanling Fan, Jiayuan Fan, Gang Yu, Bin Fu, Tao Chen
SPS
Length: 00:10:22
Human hair is a special material with complex and varied high-frequency details, which makes synthesizing and editing realistic, fine-grained hair a challenging task for deep learning methods. In this paper, we propose HSEGAN, a novel framework consisting of two condition modules that encode the foreground hair and the background respectively, followed by a hair synthesis generator that produces the final result from the encoded inputs. For efficient and effective hair generation, we propose hair structure-adaptive normalization (HSAN) and build the hair synthesis generator from several HSAN residual blocks. HSEGAN allows explicit manipulation of hair at three levels: color, structure, and shape. Extensive experiments on the FFHQ dataset demonstrate that our method generates higher-quality hair images than state-of-the-art methods while consuming less time in the inference stage.
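The abstract does not give HSAN's exact formulation, but structure-adaptive normalization layers of this family typically instance-normalize a feature map and then re-modulate it with per-pixel scale and shift predicted from a spatial condition map. Below is a minimal NumPy sketch under that assumption; the function name `hsan_sketch`, the 1x1-conv weights `w_gamma`/`w_beta`, and the tensor shapes are all illustrative, not the paper's implementation.

```python
import numpy as np

def hsan_sketch(x, structure_map, w_gamma, b_gamma, w_beta, b_beta, eps=1e-5):
    """Hypothetical SPADE-style structure-adaptive normalization step.

    x             : (C, H, W) feature map
    structure_map : (S, H, W) hair-structure condition map
    w_gamma/w_beta: (C, S) weights of a 1x1 convolution
    b_gamma/b_beta: (C,) biases of that convolution
    """
    # Instance-normalize each channel of x over its spatial extent.
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)

    # Predict per-pixel scale (gamma) and shift (beta) from the structure
    # map; a 1x1 convolution is just a tensordot over the channel axis.
    gamma = np.tensordot(w_gamma, structure_map, axes=1) + b_gamma[:, None, None]
    beta = np.tensordot(w_beta, structure_map, axes=1) + b_beta[:, None, None]

    # Spatially-varying modulation: structure information steers each pixel.
    return gamma * x_norm + beta
```

Because gamma and beta vary per pixel, the structure condition can steer local hair detail, unlike a plain batch/instance norm whose affine parameters are constant over the image.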