Exocentric To Egocentric Image Generation Via Parallel Generative Adversarial Network
Gaowen Liu, Hao Tang, Hugo Latapie, Yan Yan
SPS
Length: 15:01
Cross-view image generation has recently been proposed to generate images of one view from another, dramatically different view. In this paper we investigate exocentric (third-person) to egocentric (first-person) view image generation. This is a challenging task, since the egocentric view is sometimes remarkably different from the exocentric view; transforming appearances across the two views is therefore non-trivial. In particular, we propose a novel Parallel Generative Adversarial Network (P-GAN) with a novel cross-cycle loss to learn the shared information needed to generate egocentric images from the exocentric view. We also incorporate a novel contextual feature loss into the learning procedure to capture the contextual information in images. Extensive experiments on Exo-Ego datasets show that our model outperforms state-of-the-art approaches.
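The core idea of a cross-cycle loss is that the two parallel generators should be mutually consistent: mapping a view across to the other domain and back should reconstruct the original. A minimal sketch of that objective is below; the linear "generators", their random weights, and all function names are hypothetical stand-ins for illustration, not the paper's actual P-GAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "generators" standing in for the paper's parallel
# convolutional generators; weights are random for illustration only.
W_exo2ego = rng.standard_normal((16, 16)) * 0.1
W_ego2exo = rng.standard_normal((16, 16)) * 0.1

def g_exo2ego(x):
    # exocentric -> egocentric mapping (stand-in)
    return x @ W_exo2ego

def g_ego2exo(x):
    # egocentric -> exocentric mapping (stand-in)
    return x @ W_ego2exo

def l1(a, b):
    # mean absolute error between two image batches
    return np.abs(a - b).mean()

def cross_cycle_loss(exo, ego):
    """Map each view across to the other and back; penalise round-trip error."""
    rec_exo = g_ego2exo(g_exo2ego(exo))  # exo -> ego -> exo
    rec_ego = g_exo2ego(g_ego2exo(ego))  # ego -> exo -> ego
    return l1(rec_exo, exo) + l1(rec_ego, ego)

# Toy "image" batches, flattened to 16-dim vectors for simplicity.
exo = rng.random((4, 16))
ego = rng.random((4, 16))
loss = cross_cycle_loss(exo, ego)
print(np.isfinite(loss) and loss >= 0.0)
```

In the full model this reconstruction term would be combined with the adversarial losses of both generators and the contextual feature loss (an L1 distance between feature maps of generated and real images under a shared feature extractor), so that the parallel branches share appearance information across the two views.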