FSFT-Net: Face Transfer Video Generation With Few-Shot Views
Luchuan Song, Guojun Yin, Bin Liu, Yuhui Zhang, Nenghai Yu
SPS
Transferring head pose and expression from only a few photographs is a novel yet challenging task in deepfake generation. Although impressive results have been achieved in related works, existing methods still have two limitations: 1) most are based on computer graphics, which consume substantial computing resources while lacking generalization across identities; 2) existing few-shot methods cannot handle few-shot style-transfer video generation. To address these problems, we propose a novel deep learning framework, named the Few-Shot Face Transfer Network (FSFT-Net), for face transfer video generation. The proposed FSFT-Net, driven by an arbitrary portrait video, employs a cascaded style generator to synthesize stable video from a few free-view images. In addition, frame and video discriminators are adopted to optimize the proposed generator. The FSFT-Net performs long-term adversarial training on large-scale video datasets. Extensive experiments demonstrate that our FSFT-Net outperforms state-of-the-art methods both quantitatively and qualitatively.
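The abstract's two-discriminator setup (a frame discriminator scoring single frames and a video discriminator scoring clips, jointly driving the generator) can be sketched as a combined adversarial objective. This is a minimal illustration of such a loss, not the paper's exact formulation; the function names and the binary cross-entropy GAN loss are assumptions.

```python
import numpy as np

def bce_logits(logits, target):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * target
                   + np.log1p(np.exp(-np.abs(logits))))

def generator_loss(frame_logits_fake, video_logits_fake):
    """Generator tries to make BOTH discriminators label its
    output as real (target 1): per-frame realism plus
    clip-level temporal stability (hypothetical combination)."""
    return (bce_logits(frame_logits_fake, 1.0)
            + bce_logits(video_logits_fake, 1.0))

def discriminator_loss(logits_real, logits_fake):
    """Each discriminator (frame or video) separates real
    inputs (target 1) from generated ones (target 0)."""
    return bce_logits(logits_real, 1.0) + bce_logits(logits_fake, 0.0)
```

When both discriminators confidently accept the generated frames and clips (large positive logits), the generator loss approaches zero; the same loss shape is applied at the frame level and the clip level, which is how the video discriminator can enforce temporal consistency that a per-frame critic alone misses.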