SYGNET: A SVD-YOLO Based Ghostnet For Real-Time Driving Scene Parsing

Hewei Wang, Bolun Zhu, Yijie Li, Kaiwen Gong, Ziyuan Wen, Shaofan Wang, Soumyabrata Dev

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:32
03 Oct 2022

Makeup transfer aims at rendering the makeup style of a given reference image onto a source image. Most existing works have achieved promising progress through disentangled representation. However, these methods do not consider the spatial distribution of the makeup style, which inevitably changes makeup-irrelevant regions. To solve this problem, we introduce a novel feature-space disentangling framework based on a spatial attention mechanism for makeup transfer. In particular, we first use a single encoder to extract all features of the image. Then we propose a learnable spatial semantic classifier that classifies the extracted features into makeup-specific and makeup-irrelevant features. Finally, we complete makeup transfer by swapping the classified features. Experiments demonstrate that the makeup-specific features precisely capture the spatial distribution of the makeup style. The superiority of our approach is further demonstrated by experiments showing that it produces promising visual results while keeping makeup-irrelevant regions unchanged.
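The pipeline in the abstract (encode, classify features into makeup-specific and makeup-irrelevant parts via spatial attention, then swap) can be sketched as follows. This is a minimal illustrative sketch with NumPy, not the authors' implementation: the function names, the sigmoid-based soft mask, and the toy feature shapes are all assumptions for exposition.

```python
import numpy as np

def classify_features(features, attention_logits):
    """Split a feature map into makeup-specific and makeup-irrelevant parts
    using a soft spatial attention mask in [0, 1] (hypothetical stand-in for
    the paper's learnable spatial semantic classifier)."""
    mask = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid -> soft spatial mask
    return features * mask, features * (1.0 - mask)

def swap_makeup(src_feats, ref_feats, attention_logits):
    """Transfer makeup by combining the reference's makeup-specific features
    with the source's makeup-irrelevant features."""
    ref_specific, _ = classify_features(ref_feats, attention_logits)
    _, src_irrelevant = classify_features(src_feats, attention_logits)
    return ref_specific + src_irrelevant

# Toy example: single-channel 2x2 feature maps.
src = np.ones((2, 2))            # "source" features
ref = np.full((2, 2), 3.0)       # "reference" features
# Large positive logits -> makeup region (left column);
# large negative logits -> makeup-irrelevant region (right column).
logits = np.array([[10.0, -10.0],
                   [10.0, -10.0]])
out = swap_makeup(src, ref, logits)
# Left column follows the reference (~3.0); right column stays source (~1.0),
# i.e. makeup-irrelevant regions are left unchanged.
```

The soft mask, rather than a hard binary classification, keeps the swap differentiable, which is what allows a spatial classifier like this to be learned end-to-end.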
