Paper ID | ARS-2.2
Paper Title | SELF-GUIDED ADVERSARIAL LEARNING FOR DOMAIN ADAPTIVE SEMANTIC SEGMENTATION
Authors | Yu-Ting Pang, Jui Chang, Chiou-Ting Hsu, National Tsing Hua University, Taiwan
Session | ARS-2: Image and Video Segmentation
Location | Area I
Session Time | Monday, 20 September, 15:30 - 17:00
Presentation Time | Monday, 20 September, 15:30 - 17:00
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract | Unsupervised domain adaptation has been introduced to generalize semantic segmentation models from labeled synthetic images to unlabeled real-world images. Although much effort has been devoted to minimizing the cross-domain gap, segmentation results on real-world data remain highly unstable. In this paper, we discuss two main issues that hinder previous methods from achieving satisfactory results and propose a novel self-guided adversarial learning method to strengthen domain adaptation. First, to cope with the unpredictable data variation in the real-world domain, we develop a self-guided adversarial learning scheme that selects reliable target pixels as guidance to lead the adaptation of the remaining pixels. Second, to address the class-imbalance issue, we devise the selection strategy independently for each class and incorporate it with class-level adversarial learning in a unified framework. Experimental results show that the proposed method significantly improves over previous methods on several benchmark datasets.
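For illustration, the per-class selection of reliable target pixels described in the abstract might look like the following PyTorch sketch. This is only a minimal reconstruction from the abstract, not the authors' implementation: the confidence threshold, the median-based per-class adaptation, and the function name `select_reliable_pixels` are all our assumptions.

```python
import torch
import torch.nn.functional as F

def select_reliable_pixels(logits, base_threshold=0.9):
    """Select 'reliable' target pixels independently within each class.

    logits: (B, C, H, W) segmentation logits on unlabeled target images.
    Returns a boolean mask (B, H, W) marking guidance pixels.
    """
    probs = F.softmax(logits, dim=1)      # per-pixel class probabilities
    conf, pred = probs.max(dim=1)         # confidence and predicted class
    mask = torch.zeros_like(conf, dtype=torch.bool)
    for c in range(logits.shape[1]):
        cls = pred == c
        if not cls.any():
            continue
        # Per-class threshold (assumption): cap the global threshold by the
        # class's median confidence so rare classes still contribute guidance
        # pixels instead of being drowned out by dominant classes.
        thr = min(base_threshold, conf[cls].median().item())
        mask |= cls & (conf >= thr)
    return mask

# Toy usage with random logits standing in for a segmentation network's output.
logits = torch.randn(2, 19, 64, 128)      # e.g. 19 Cityscapes classes
mask = select_reliable_pixels(logits)
print(mask.float().mean())                # fraction of pixels kept as guidance
```

In the paper's framework, such a mask would let the selected pixels guide the adversarial adaptation of the remaining, less reliable pixels; performing the selection per class is what addresses the class-imbalance issue the abstract raises.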