Dual-Consistency Self-Training For Unsupervised Domain Adaptation

Jie Wang, Chaoliang Zhong, Cheng Feng, Jun Sun, Masaru Ide, Yasuto Yokota

21 Sep 2021

Unsupervised domain adaptation (UDA) is a challenging task in which a model must handle unlabeled target data that exhibit a domain discrepancy with respect to labeled source data. Many methods learn domain-invariant features by aligning the marginal distributions of the two domains, but they ignore the intrinsic structure within the target domain, which may lead to insufficient or false alignment. Class-level alignment methods instead align the features of the same class across the source and target domains, but they rely heavily on the accuracy of the pseudo-labels predicted for the target data. Here, we develop a novel self-training method that obtains more accurate pseudo-labels via a dual-consistency strategy that models the intrinsic structure of the target domain. The strategy first improves the accuracy of pseudo-labels through voting consistency, and then reduces the negative effect of the remaining incorrect predictions through structure consistency, which relates the intrinsic structures across domains. Our method achieves performance comparable to the state of the art on three standard UDA benchmarks.
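
The abstract does not spell out the voting mechanism, so the following is only a minimal sketch of the voting-consistency idea: pseudo-labels are kept for a target sample only when several predictors agree on its class. The function name voting_consistent_pseudo_labels, the choice of unanimous voting, the confidence threshold, and the assumption that the voters are multiple classifier heads (or predictions on augmented views) are all illustrative assumptions, not the authors' exact method.

import torch
import torch.nn.functional as F

def voting_consistent_pseudo_labels(logits_list, threshold=0.9):
    """Keep target pseudo-labels only where multiple predictors agree.

    logits_list: list of (N, C) logit tensors from different predictors
    (e.g. classifier heads or augmented views of the same network).
    Returns (labels, mask): pseudo-labels and a boolean mask marking
    samples that are both consistent and confident.
    """
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    preds = [p.argmax(dim=1) for p in probs]

    # Voting consistency: every predictor must assign the same class.
    agree = torch.ones_like(preds[0], dtype=torch.bool)
    for p in preds[1:]:
        agree &= preds[0] == p

    # Confidence filter on the averaged class probabilities
    # (an assumed, common companion to voting-based selection).
    mean_prob = torch.stack(probs).mean(dim=0)
    confident = mean_prob.max(dim=1).values >= threshold

    return preds[0], agree & confident

In a self-training loop, samples whose mask is False would simply be excluded from the class-level alignment loss until later rounds produce more consistent predictions for them.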
