MODIFY: Model-driven Face Stylization without Style Images

Yuhe Ding (Institute of Automation, Chinese Academy of Sciences); Jian Liang (CASIA); Jie Cao (Institute of Automation, Chinese Academy of Sciences); Aihua Zheng (Anhui University); Ran He (Institute of Automation, Chinese Academy of Sciences)

06 Jun 2023

Existing face stylization methods require access to the target (style) domain during translation, which violates privacy regulations and limits their applicability in real-world systems. To address this issue, we propose a new method called MODel-drIven Face stYlization (MODIFY), which relies on a generative model to bypass the dependence on target images. Briefly, MODIFY first trains a generative model in the target domain and then translates a source input to the target domain via the provided style model. To preserve the multimodal style information, MODIFY further introduces an additional remapping network that maps a known continuous distribution into the encoder's embedding space. During translation in the source domain, MODIFY fine-tunes the encoder module within the target style-preserving model to capture the content of the source input as precisely as possible. Our method is extremely simple and supports versatile training modes for face stylization, i.e., offline, online, and test-time training. Experimental results on several different datasets validate the effectiveness of MODIFY for unsupervised face stylization.
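For readers who want a concrete picture of the pipeline, below is a minimal PyTorch sketch of the three stages described in the abstract: (1) pretraining an encoder-decoder generative model on target-style faces, (2) training the remapping network from a known continuous distribution into the encoder's embedding space, and (3) fine-tuning only the encoder on a source input at translation time. All architectures, losses, and hyperparameters here are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),    # e.g., 256 -> 128
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 128 -> 64
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, h):
        return self.net(h)

class Remapper(nn.Module):
    """Maps z ~ N(0, I) into the encoder's embedding space (a 64x64 feature map)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8)
        self.up = nn.Upsample(scale_factor=8)  # 8x8 -> 64x64
    def forward(self, z):
        return self.up(self.fc(z).view(-1, 128, 8, 8))

enc, dec, remap = Encoder(), Decoder(), Remapper()

# Stage 1: train the generative model (enc + dec) on target-style faces only.
def stage1_step(x_tgt, opt):
    loss = F.l1_loss(dec(enc(x_tgt)), x_tgt)  # reconstruction in target domain
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

# Stage 2: train the remapping network so that codes drawn from a known
# continuous distribution land in the encoder's embedding space. A simple
# moment-matching proxy is used here; the paper's objective may differ.
def stage2_step(x_tgt, opt):
    z = torch.randn(x_tgt.size(0), 128)
    h_fake, h_real = remap(z), enc(x_tgt).detach()
    loss = F.mse_loss(h_fake.mean(0), h_real.mean(0)) \
         + F.mse_loss(h_fake.std(0), h_real.std(0))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

# Stage 3 (translation): with the decoder frozen, fine-tune only the encoder
# on the source input so that its content is preserved in the stylized output.
def stage3_step(x_src, opt_enc):
    y = dec(enc(x_src))             # stylized output
    loss = F.l1_loss(y, x_src)      # content-preservation proxy
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()
    return y, loss
```

Because only the encoder is updated in stage 3 while the decoder carries the target style, the same code path naturally supports the offline, online, and test-time training modes mentioned in the abstract.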
