
LEARNING DISENTANGLED FEATURES FOR NERF-BASED FACE RECONSTRUCTION

Peizhi Yan, Rabab Ward, Dan Wang, Qiang Tang, Shan Du

Poster, 10 Oct 2023

The 3D-aware parametric face model HeadNeRF renders photo-realistic face images, but it has two limitations: (1) it reconstructs a face by fitting to a single image, which is slow and prone to overfitting; and (2) it lacks explicit 3D geometry information, which makes applying a semantic facial-parts-based loss challenging. This paper presents a 3D-aware face reconstruction learning framework tailored to HeadNeRF that addresses both limitations. To address the first, we train a face encoder network that directly predicts the disentangled features for facial reconstruction. To address the second, we introduce a lightweight semantic face segmentation network and a facial-parts-based loss function that improve reconstruction accuracy and quality. Our experiments show that the proposed method reduces reconstruction time and improves reconstruction accuracy.
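As a rough illustration of the facial-parts-based loss described above, the sketch below weights per-pixel reconstruction error by semantic region using a segmentation map. The part labels, weights, and function names are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a facial-parts-based
# reconstruction loss that weights pixel errors per semantic region
# predicted by a face segmentation network.
import torch

# Hypothetical label indices and per-part weights.
PART_WEIGHTS = {
    0: 0.5,  # background
    1: 1.0,  # skin
    2: 2.0,  # eyes
    3: 2.0,  # mouth
    4: 1.5,  # nose
}

def parts_based_loss(rendered, target, seg_labels):
    """Per-part weighted L1 loss.

    rendered, target: (B, 3, H, W) float tensors in [0, 1]
    seg_labels:       (B, H, W) integer tensor of part labels
    """
    pixel_err = (rendered - target).abs().mean(dim=1)  # (B, H, W)
    loss = rendered.new_zeros(())
    for label, weight in PART_WEIGHTS.items():
        mask = (seg_labels == label).float()
        denom = mask.sum().clamp(min=1.0)        # avoid division by zero
        loss = loss + weight * (pixel_err * mask).sum() / denom
    return loss

# Usage (shapes are arbitrary): loss = parts_based_loss(render, photo, seg)
```

In this kind of setup, the weights let perceptually important regions such as the eyes and mouth contribute more to the training signal than the background.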