
Learning how to learn domain-invariant parameters for domain generalization

Feng Hou (University of Chinese Academy of Sciences); Yao Zhang (Shanghai AI Lab); Yang Liu (Institute of Computing Technology, University of Chinese Academy of Sciences, Lenovo AI Lab); Jin Yuan (Southeast University); Cheng Zhong (Lenovo Research, AI Lab); Yang Zhang (Lenovo Ltd.); Zhongchao Shi (Lenovo); Jianping Fan (Lenovo); Zhiqiang He (Lenovo Ltd.)

07 Jun 2023

Due to domain shift, deep neural networks (DNNs) often fail to generalize to unseen test data in practice. Domain generalization (DG) aims to overcome this issue by capturing domain-invariant representations from source domains. Motivated by the insight that only a subset of a DNN's parameters is optimized to extract domain-invariant representations, we seek a general model that can identify and preferentially update these domain-invariant parameters. In this paper, we propose two modules, Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB), which encourage the model to focus on parameters that share a unified optimization direction between pairs of contrastive samples. Extensive experiments on two benchmarks demonstrate that our method achieves state-of-the-art performance with strong generalization capability.
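The core idea, emphasizing parameters whose gradients agree across contrastive sample pairs, can be illustrated in a few lines. Below is a minimal, hypothetical PyTorch sketch of such gradient-agreement masking; it is not the authors' implementation, and the sign-based agreement test, the `digb_step` function, and the 0.1 damping factor are all assumptions made for illustration.

```python
import torch

def digb_step(model, loss_fn, x_a, x_b, y, lr=1e-3):
    """Hypothetical sketch of domain-invariance-guided backpropagation.

    Gradients are computed separately on two contrastive views (x_a, x_b)
    of the same samples. Parameters whose gradients share a sign across
    the pair are treated as "domain-invariant" and updated at full
    strength; disagreeing parameters are down-weighted.
    """
    grads = []
    for x in (x_a, x_b):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads.append([p.grad.clone() for p in model.parameters()])

    with torch.no_grad():
        for p, g_a, g_b in zip(model.parameters(), grads[0], grads[1]):
            # Mask of entries with a unified optimization direction.
            agree = (torch.sign(g_a) == torch.sign(g_b)).float()
            avg = 0.5 * (g_a + g_b)
            # Emphasize agreeing directions; damp the rest (factor assumed).
            p -= lr * (agree * avg + 0.1 * (1.0 - agree) * avg)
```

In practice `x_a` and `x_b` would come from the two contrastive views of each batch, and the step would replace the optimizer update inside the training loop.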
