Lecture 11 Oct 2023

In this paper, we present a portrait-to-anime framework for portrait translation. Our work focuses on synthesizing anime faces that adhere to the style of a given reference anime. Existing methods often struggle to transfer depth and texture information from reference anime faces, producing notable artifacts and distortions in the generated facial shapes. To address these issues, we propose Hi-Res ACG, a series of model designs tailored specifically to anime portraits, enabling the synthesis of high-quality anime faces. Unlike previous GAN-based approaches that rely primarily on color and global semantics for image translation, our method incorporates a multi-head attention high-level semantic module to guide the translation process. We also introduce a novel surface texture reconstruction encoder within the U-Net structure, which markedly improves texture quality and yields visually compelling anime portraits. Extensive experiments show that Hi-Res ACG achieves strong performance in reconstructing images for the portrait-to-anime task. Comparisons with other state-of-the-art models on classical tasks such as portrait-to-anime, horse-to-zebra, and cat-to-dog further indicate a general improvement on image translation tasks.
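The abstract names a multi-head attention high-level semantic module as a core ingredient but does not specify its internals. As a point of reference, the sketch below shows standard multi-head self-attention over a flattened feature map in plain NumPy; every name and dimension here is a hypothetical illustration of the generic mechanism, not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    # x: (seq_len, d_model) — e.g. a flattened spatial feature map.
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project to queries/keys/values and split into heads: (heads, seq, d_head).
    q = (x @ w_q).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product attention, computed independently per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)
    # Merge heads back to (seq_len, d_model) and apply the output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o

# Toy example: 16 spatial positions with 64-dimensional features, 4 heads.
rng = np.random.default_rng(0)
seq_len, d_model, heads = 16, 64, 4
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.1
                      for _ in range(4))
y = multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads=heads)
print(y.shape)  # (16, 64)
```

In a translation network such an attention block would typically sit between encoder stages, letting each spatial position aggregate style cues from the whole reference image rather than only its local neighborhood.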