MUTUALLY SUPERVISED LEARNING VIA INTERACTIVE CONSISTENCY FOR GEOGRAPHIC OBJECT SEGMENTATION FROM WEAKLY LABELED REMOTE SENSING IMAGERY
Yanan Liu, Libao Zhang
Geographic object segmentation from weakly annotated remote sensing images has become a research hotspot, since it can greatly reduce the costly annotation burden. Recent methods have made remarkable progress by dividing the task into two sequential steps: first producing pseudo labels (PLs) with a localization model, then using the PLs to train a segmentation network that yields the final results. The one-way knowledge transfer in these schemes, however, lacks feedback from the segmentation model to the localization model, which may result in suboptimal performance. In this paper, we develop a mutually supervised learning (MSL) framework for geographic object segmentation under image-wise annotations. First, MSL trains the localization and segmentation models concurrently and uses the output of each model as pseudo supervision for the other by formulating an interactive consistency loss, which encourages each model to provide positive feedback and guidance to its counterpart. Second, a variance-based uncertainty estimation strategy is introduced to explicitly approximate the uncertainty of the PLs, which helps to alleviate the detrimental effect of learning from noisy PLs. Finally, we design a multi-scale activation integration-based localization model to produce high-quality localization maps. Comprehensive evaluations and ablation studies validate the superiority of the MSL framework.
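The core ideas in the abstract (a bidirectional consistency loss between two models, down-weighted by a variance-based uncertainty estimate of the pseudo labels) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; all function names are hypothetical, and the uncertainty weight `exp(-beta * var)` is one common choice, assumed here for illustration.

```python
import numpy as np

def variance_uncertainty(stochastic_preds):
    """Per-pixel predictive uncertainty as the variance across
    T stochastic forward passes (hypothetical estimator).

    stochastic_preds: array of shape (T, H, W, C), softmax outputs.
    Returns an (H, W) map of mean per-class variance."""
    return stochastic_preds.var(axis=0).mean(axis=-1)

def interactive_consistency_loss(loc_probs, seg_probs, var, beta=1.0, eps=1e-8):
    """Symmetric cross-supervision: each model's soft output serves as
    the pseudo label for the other. Pixels with high PL variance are
    down-weighted so noisy pseudo labels contribute less.

    loc_probs, seg_probs: (H, W, C) softmax maps; var: (H, W)."""
    w = np.exp(-beta * var)  # low weight where the PL is uncertain
    ce_loc_to_seg = -(loc_probs * np.log(seg_probs + eps)).sum(axis=-1)
    ce_seg_to_loc = -(seg_probs * np.log(loc_probs + eps)).sum(axis=-1)
    return float((w * (ce_loc_to_seg + ce_seg_to_loc)).mean())
```

When both maps agree and the variance is zero, the loss reduces to twice the entropy of the shared prediction; raising the variance shrinks the pixel weights and hence the loss, which is the intended noise-suppression behavior.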