SemanticGAN: Generative Adversarial Networks for Semantic Image to Photo-Realistic Image Translation
Junling Liu, Yuexian Zou, Dongming Yang
SPS
Generative Adversarial Networks (GANs) have shown remarkable success in the Semantic label map to Photo-realistic image Translation (S2PT) task. However, the results of state-of-the-art approaches often suffer from blurriness and artifacts and remain far from realistic, since these methods lack effective semantic constraints to preserve semantic information and ignore the structural correlations between image textures. To address these problems, we propose SemanticGAN to synthesize high-resolution images with fine details and realistic textures from a semantic label map. Specifically, we propose a Semantic Information Preserved Loss (SIPL) that uses a segmentation model to maintain semantic information during generation. Furthermore, we develop a novel generator that captures correlations between image textures via a newly designed Correlated Residual Block (CRB). Experiments on the Cityscapes dataset show that SemanticGAN outperforms many recent state-of-the-art methods in both qualitative and quantitative performance.
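The abstract does not give the exact form of the SIPL, but a common way to realize such a semantic-preservation constraint is to run a (typically frozen) segmentation model on the generated image and penalize the per-pixel cross-entropy against the input label map. The sketch below is a minimal NumPy illustration of that idea; the function name and tensor layout are assumptions, not the authors' implementation.

```python
import numpy as np

def semantic_preservation_loss(seg_logits, label_map):
    """Hypothetical sketch of a semantic-preservation loss (SIPL-style):
    cross-entropy between a segmentation model's predictions on the
    generated image and the input semantic label map.

    seg_logits: (H, W, C) raw class scores from a segmentation model
                applied to the generated image (assumed layout)
    label_map:  (H, W) integer class ids of the input semantic map
    """
    # numerically stable softmax over the class dimension
    z = seg_logits - seg_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = label_map.shape
    # per-pixel negative log-likelihood of the labeled class
    nll = -np.log(probs[np.arange(h)[:, None],
                        np.arange(w)[None, :],
                        label_map] + 1e-12)
    return nll.mean()
```

In training, this term would be added to the adversarial loss so that the generator is penalized whenever the synthesized image's semantics drift from the conditioning label map.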