ATTENTION-BASED ADVERSARIAL PARTIAL DOMAIN ADAPTATION

Mengzhu Wang, Xiong Peng, Wei Yu, Zhigang Luo, Shan An, Xiao Luo, Junyang Chen

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:04:46
08 May 2022

With the rapid development of vision-based deep learning (DL), generating large-scale synthetic data to supplement real data has become an effective way to train DL models for domain adaptation. However, previous vanilla domain adaptation methods generally assume that the source and target domains share the same label space. This assumption no longer holds in a more realistic scenario that requires adaptation from a larger, more diverse source domain to a smaller target domain with fewer classes. To handle this problem, we propose attention-based adversarial partial domain adaptation (AADA). Specifically, we leverage adversarial domain adaptation to augment the target domain using the source domain, which readily turns the task into vanilla domain adaptation. Meanwhile, to focus accurately on transferable features, we apply an attention-based method when training the adversarial networks to obtain better transferable semantic features. Experiments on four benchmarks demonstrate that the proposed method outperforms existing methods by a large margin, especially on tough domain adaptation tasks such as VisDA-2017.
