Facial Feature Embedded CycleGAN for VIS-NIR Translation
Huijiao Wang, Haijian Zhang, Lei Yu, Li Wang, Xulei Yang
Visible and near-infrared (VIS-NIR) face recognition remains a challenging task due to the spectral differences between the two modalities. Inspired by CycleGAN, this paper presents a method for translating between VIS and NIR face images. To this end, we propose a new facial feature embedded CycleGAN. First, to learn modality-specific features while preserving the facial representation common to the VIS and NIR domains, we employ a general facial feature extractor (FFE); here, a MobileFaceNet pre-trained on a VIS face database serves as the FFE. Second, domain-invariant feature learning is strengthened by a new pixel consistency loss. Finally, we build a new WHU VIS-NIR database containing variations in face rotation and expression to enrich the training data. Experimental results on the Oulu-CASIA and our WHU VIS-NIR databases show that the proposed FFE-based CycleGAN (FFE-CycleGAN) outperforms several state-of-the-art methods and achieves 96.5% accuracy.
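The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, how a frozen facial feature extractor and a pixel consistency term could be added to the standard CycleGAN cycle loss. The generator names `G_vis2nir` and `G_nir2vis`, the `ffe` module, the loss weights, and the specific form of the pixel consistency term (an L1 penalty between input and translated output) are illustrative placeholders, not details taken from the paper.

```python
# Minimal sketch of generator-side losses for an FFE-embedded CycleGAN.
# Assumptions: ffe is a pre-trained, frozen face feature extractor
# (e.g. a MobileFaceNet); exact loss definitions and weights are hypothetical.
import torch
import torch.nn.functional as F

def ffe_cyclegan_generator_losses(G_vis2nir, G_nir2vis, ffe, vis, nir,
                                  w_cycle=10.0, w_feat=1.0, w_pix=1.0):
    """Cycle, feature-consistency, and pixel-consistency terms (adversarial
    terms omitted for brevity)."""
    ffe.eval()  # the FFE stays fixed; it only supplies facial features

    # Forward translations between the two spectral domains
    fake_nir = G_vis2nir(vis)
    fake_vis = G_nir2vis(nir)

    # Standard CycleGAN cycle-reconstruction term
    rec_vis = G_nir2vis(fake_nir)
    rec_nir = G_vis2nir(fake_vis)
    loss_cycle = F.l1_loss(rec_vis, vis) + F.l1_loss(rec_nir, nir)

    # Feature consistency: translated faces should keep the facial features
    # of their source images under the frozen extractor
    with torch.no_grad():
        feat_vis = ffe(vis)
        feat_nir = ffe(nir)
    loss_feat = (F.l1_loss(ffe(fake_nir), feat_vis) +
                 F.l1_loss(ffe(fake_vis), feat_nir))

    # Pixel consistency (assumed form): encourage pixel-level agreement
    # between each input image and its translated counterpart
    loss_pix = F.l1_loss(fake_nir, vis) + F.l1_loss(fake_vis, nir)

    return w_cycle * loss_cycle + w_feat * loss_feat + w_pix * loss_pix
```

In this reading, the FFE contributes gradients to the generators while its own weights remain untouched, which is one common way to preserve identity information during cross-spectral translation.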