Semantic-Aware Unpaired Image-to-Image Translation for Urban Scene Images
Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
Unpaired image-to-image (I2I) translation methods have been developed for several years. Existing methods do not take the semantic information of the original image into consideration; they may perform well on simple datasets of uncomplicated scenes but fail on complex datasets whose scenes contain abundant objects, such as urban scenes. To tackle this problem, we modify the previous problem setting and present a novel semantic-aware method. Specifically, we use additional semantic label maps of the training images during training, while no labels are required at test time. We adopt a semantic knowledge distillation strategy to acquire semantic information from the labels and construct a dedicated normalization layer to introduce this semantic information into the network. Being aware of pixel-level semantic information, our method realizes better I2I translation than previous methods. Experiments on benchmark datasets of urban scenes validate the effectiveness of our method.
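The abstract does not spell out the design of the semantic-conditioned normalization layer, so the following is only a minimal PyTorch sketch of one plausible realization: a SPADE-style spatially-adaptive normalization (Park et al., 2019) that modulates features with pixel-wise scale and shift predicted from the semantic label map. The class name `SemanticAwareNorm`, the hidden width, and the use of instance normalization are assumptions for illustration, not the authors' specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAwareNorm(nn.Module):
    """Normalization modulated by a pixel-level semantic map.

    SPADE-style sketch; the paper's actual layer design may differ.
    """

    def __init__(self, num_features: int, num_classes: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization of the incoming activations.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # A small conv net maps the one-hot semantic map to spatially
        # varying scale (gamma) and shift (beta) parameters.
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # Resize the semantic map to the feature resolution.
        seg = F.interpolate(seg, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Denormalize with pixel-wise, class-dependent parameters.
        return self.norm(x) * (1.0 + gamma) + beta


# Toy usage with hypothetical shapes (19 classes as in Cityscapes):
norm = SemanticAwareNorm(num_features=256, num_classes=19)
x = torch.randn(1, 256, 64, 128)               # generator features
seg = torch.randn(1, 19, 256, 512).softmax(1)  # stand-in one-hot semantic map
y = norm(x, seg)                               # same shape as x
```

Note that this sketch corresponds to the label-conditioned, training-time side of the method: since labels are unavailable at test time, the paper's semantic knowledge distillation strategy transfers the label-derived information so that inference no longer depends on ground-truth maps.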