MCKD: Mutually Collaborative Knowledge Distillation for Federated Domain Adaptation and Generalization

Ziwei Niu (Zhejiang University); Hongyi Wang (Zhejiang University); Hao Sun (Zhejiang University); Shuyi Ouyang (Zhejiang University); Yen-Wei Chen (Ritsumeikan University); Lanfen Lin (Zhejiang University)

06 Jun 2023

Conventional unsupervised domain adaptation (UDA) and domain generalization (DG) methods rely on the assumption that all source domains can be directly accessed and combined for model training. However, this centralized training strategy may violate privacy policies in many real-world applications. A paradigm for tackling this problem is to train multiple local models and aggregate a generalized central model without data sharing. Recent methods have made remarkable advancements in this paradigm by exploiting parameter alignment and aggregation. But as the variety of source domains increases, directly aligning and aggregating local parameters becomes more challenging. Adopting a different approach in this work, we devise a data-free semantic collaborative distillation strategy to learn domain-invariant representations for both federated UDA and DG. Each local model transmits its predictions to the central server and derives its target distribution from the average of the other local models' distributions, facilitating the mutual transfer of domain-specific knowledge. When unlabeled target data is available, we introduce a novel UDA strategy termed knowledge filter to adapt the central model to the target data. Extensive experiments on four UDA and DG datasets demonstrate that our method achieves competitive performance compared with state-of-the-art methods.
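A minimal sketch of the mutual distillation objective described above, not the authors' released code: each local model is trained so that its softened prediction matches the average distribution of the other local models' predictions on the same batch. The tensor shapes, temperature value, and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def mutual_distillation_loss(all_logits: torch.Tensor, client_idx: int,
                             temperature: float = 2.0) -> torch.Tensor:
    """all_logits: (num_clients, batch_size, num_classes) logits that each
    local model sent to the central server for the same batch of inputs.
    Returns the distillation loss for the client at client_idx."""
    num_clients = all_logits.size(0)

    # Soft target: average of the *other* clients' softened distributions.
    peer_idx = [i for i in range(num_clients) if i != client_idx]
    peer_probs = F.softmax(all_logits[peer_idx] / temperature, dim=-1)
    target = peer_probs.mean(dim=0)                     # (batch, classes)

    # Student: this client's own softened log-probabilities.
    log_student = F.log_softmax(all_logits[client_idx] / temperature, dim=-1)

    # KL divergence to the peer-average target, scaled by T^2 as is
    # conventional in knowledge distillation.
    return F.kl_div(log_student, target, reduction="batchmean") * temperature ** 2


if __name__ == "__main__":
    # Example with random logits: 4 clients, batch size 8, 10 classes.
    logits = torch.randn(4, 8, 10)
    print(mutual_distillation_loss(logits, client_idx=0).item())
```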
