Adversarial Mixup Synthesis Training For Unsupervised Domain Adaptation
Yuhua Tang, Haotian Wang, Zhipeng Lin, Liyang Xu
Domain adversarial training is a popular approach for Unsupervised Domain Adaptation (DA). However, the transferability of the adversarial training framework may drop greatly on adaptation tasks with a large distribution divergence between the source and target domains. In this paper, we propose a new approach termed Adversarial Mixup Synthesis Training (AMST) to alleviate this issue. AMST augments training with synthesized samples obtained by linearly interpolating between pairs of hidden representations and their domain labels. In this way, AMST encourages the model to make consistent, less confident domain predictions at interpolation points, which leads it to learn domain-specific representations with fewer directions of variance. Building on previous work, we conduct a theoretical analysis of this phenomenon under ideal conditions and show that AMST can improve generalization ability. Finally, experiments on benchmark datasets demonstrate the effectiveness and practicability of AMST. We will publicly release our code on GitHub soon.
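To make the interpolation step concrete, below is a minimal PyTorch sketch of the hidden-representation mixup described in the abstract. The names (`feature_extractor`, `domain_disc`, `amst_domain_loss`) and the Beta-distribution sampling of the mixing coefficient are illustrative assumptions in the style of standard mixup, not the authors' released implementation; the adversarial part of training (e.g., a gradient-reversal layer on the feature extractor) is assumed to be handled elsewhere.

```python
# Sketch of the hidden-representation mixup step described in the abstract.
# Assumptions (not from the paper): module names, Beta(alpha, alpha) sampling
# of the mixing coefficient, and equal source/target batch sizes.
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def amst_domain_loss(feature_extractor, domain_disc, x_src, x_tgt, alpha=0.2):
    """Domain loss on mixup-synthesized hidden representations.

    Source samples carry domain label 1 and target samples label 0; a mixed
    representation carries the correspondingly interpolated soft label.
    """
    h_src = feature_extractor(x_src)   # hidden representations, source batch
    h_tgt = feature_extractor(x_tgt)   # hidden representations, target batch
    lam = Beta(alpha, alpha).sample().to(h_src.device)

    # Linearly interpolate between pairs of hidden representations ...
    h_mix = lam * h_src + (1.0 - lam) * h_tgt
    # ... and between their domain labels: lam * 1 + (1 - lam) * 0 = lam.
    y_mix = lam * torch.ones(h_src.size(0), 1, device=h_src.device)

    logits = domain_disc(h_mix)
    # Soft-label BCE pushes the discriminator toward the interpolated,
    # deliberately less confident domain prediction at interpolation points.
    return F.binary_cross_entropy_with_logits(logits, y_mix)
```

In this sketch the soft target `y_mix` is what prevents the discriminator from making overconfident domain predictions on synthesized points, matching the abstract's consistency argument.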