Self-Guided Adversarial Learning For Domain Adaptive Semantic Segmentation
Yu-Ting Pang, Jui Chang, Chiou-Ting Hsu
Unsupervised domain adaptation has been introduced to generalize semantic segmentation models from labeled synthetic images to unlabeled real-world images. Although much effort has been devoted to minimizing the cross-domain gap, segmentation results on real-world data remain highly unstable. In this paper, we discuss two main issues that hinder previous methods from achieving satisfactory results and propose a novel self-guided adversarial learning framework to strengthen domain adaptation. First, to cope with the unpredictable data variation in the real-world domain, we develop a self-guided adversarial learning method that selects reliable target pixels as guidance to lead the adaptation of the remaining pixels. Second, to address the class-imbalance issue, we devise the selection strategy for each class independently and incorporate it with class-level adversarial learning in a unified framework. Experimental results show that the proposed method significantly outperforms previous methods on several benchmark datasets.
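To illustrate the per-class selection of reliable target pixels described above, the following is a minimal PyTorch-style sketch. It is not the authors' exact criterion; the function name `select_reliable_pixels` and the `per_class_ratio` parameter are assumptions, and the paper's actual selection rule and how the resulting mask enters the adversarial loss may differ.

```python
import torch
import torch.nn.functional as F

def select_reliable_pixels(logits, per_class_ratio=0.2):
    """Illustrative sketch: per-class selection of reliable target pixels.

    logits: (B, C, H, W) segmentation logits on unlabeled target images.
    Returns a boolean mask (B, H, W) marking, within each predicted class
    independently, the top `per_class_ratio` most confident pixels.
    """
    probs = F.softmax(logits, dim=1)        # (B, C, H, W) class probabilities
    conf, pred = probs.max(dim=1)           # per-pixel confidence and predicted class
    mask = torch.zeros_like(conf, dtype=torch.bool)

    for c in range(logits.shape[1]):
        class_pix = pred == c               # pixels currently predicted as class c
        n = int(class_pix.sum())
        if n == 0:
            continue
        k = max(1, int(n * per_class_ratio))
        # Confidence of the k-th most confident pixel of class c acts as the threshold,
        # so selection is balanced across classes rather than dominated by frequent ones.
        thresh = conf[class_pix].topk(k).values.min()
        mask |= class_pix & (conf >= thresh)
    return mask
```

In a pipeline of this kind, such a mask could be used to treat the selected pixels as guidance (e.g., weighting or anchoring the class-level adversarial loss) while the remaining, less reliable pixels are adapted toward them; the exact weighting scheme here is left open, as it is specific to the paper's formulation.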