EXPRESSION-AWARE FACE RECONSTRUCTION VIA A DUAL-STREAM NETWORK
Xiaoyu Chai, Jun Chen, Chao Liang, Dongshu Xu, Chia-Wen Lin
Recently, 3D face reconstruction from a single image has achieved promising results by adopting the 3D Morphable Model (3DMM). However, face images in the wild exhibit a wide range of expressions, which the 3DMM struggles to handle due to the limited expressive ability of its linear model, resulting in distortion and ambiguity in local facial regions. To tackle this issue, we present a novel dual-stream network that deals with expression variations. Specifically, in the geometry stream, we propose novel Attribute Spatial Maps that record the spatial information of facial identity and expression attributes separately in the 2D image space. This avoids interaction between the two attributes, preserving identity information and further improving the ability to cope with expression changes. In the texture stream, we apply a style-transfer-based method to the 3DMM albedo map to synthesize facial appearance, which yields realistic, expression-irrelevant face textures. Both quantitative and qualitative evaluations on public datasets demonstrate that our approach achieves comparable results in face reconstruction under expression variations.
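The abstract gives no implementation details, but the dual-stream layout it describes can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' architecture: the module names (DualStreamFaceNet, ConvEncoder), the backbone, the coefficient dimensions (n_id=80, n_exp=64, as in common Basel Face Model setups), and the representation of the Attribute Spatial Maps as three-channel images; the style-transfer component of the texture stream is omitted.

```python
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Small convolutional encoder; a stand-in for the unspecified backbone."""

    def __init__(self, in_ch: int, out_dim: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 128, 1, 1)
        )
        self.fc = nn.Linear(128, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))


class DualStreamFaceNet(nn.Module):
    """Hypothetical dual-stream layout: a geometry stream regressing identity
    and expression 3DMM coefficients from separate Attribute Spatial Maps,
    and a texture stream predicting an albedo map from the input image."""

    def __init__(self, n_id: int = 80, n_exp: int = 64):
        super().__init__()
        # Geometry stream: one encoder per attribute map, so identity and
        # expression attributes do not interact (as the abstract describes).
        self.id_encoder = ConvEncoder(in_ch=3, out_dim=n_id)
        self.exp_encoder = ConvEncoder(in_ch=3, out_dim=n_exp)
        # Texture stream: image -> albedo map in [0, 1]; the style-transfer
        # step described in the abstract is omitted in this sketch.
        self.tex_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, id_map, exp_map, image):
        alpha_id = self.id_encoder(id_map)      # identity coefficients
        alpha_exp = self.exp_encoder(exp_map)   # expression coefficients
        albedo = self.tex_encoder(image)        # per-pixel albedo
        return alpha_id, alpha_exp, albedo


# Usage with dummy inputs (shapes are illustrative assumptions).
net = DualStreamFaceNet()
id_map = torch.rand(1, 3, 128, 128)
exp_map = torch.rand(1, 3, 128, 128)
image = torch.rand(1, 3, 128, 128)
alpha_id, alpha_exp, albedo = net(id_map, exp_map, image)
print(alpha_id.shape, alpha_exp.shape, albedo.shape)
# torch.Size([1, 80]) torch.Size([1, 64]) torch.Size([1, 3, 128, 128])
```

In a standard 3DMM pipeline, the predicted coefficients would then drive the linear shape model, S = \bar{S} + B_{id}\alpha_{id} + B_{exp}\alpha_{exp}, where \bar{S} is the mean face and B_{id}, B_{exp} are the identity and expression basis matrices; the limited expressiveness of this linear combination is exactly what the separated attribute maps aim to compensate for.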