JOURNAL ARTICLE

MCKD: Mutually Collaborative Knowledge Distillation For Federated Domain Adaptation And Generalization

Abstract

Conventional unsupervised domain adaptation (UDA) and domain generalization (DG) methods rely on the assumption that all source domains can be directly accessed and combined for model training. However, this centralized training strategy may violate privacy policies in many real-world applications. A common paradigm for tackling this problem is to train multiple local models and aggregate them into a generalized central model without data sharing. Recent methods have made remarkable advancements in this paradigm by exploiting parameter alignment and aggregation. However, as the variety of source domains increases, directly aligning and aggregating local parameters becomes increasingly challenging. Taking a different approach, in this work we devise a data-free semantic collaborative distillation strategy to learn domain-invariant representations for both federated UDA and DG. Each local model transmits its predictions to the central server and derives its target distribution from the average of the other local models' distributions, facilitating the mutual transfer of domain-specific knowledge. When unlabeled target data is available, we introduce a novel UDA strategy termed the knowledge filter to adapt the central model to the target data. Extensive experiments on four UDA and DG datasets demonstrate that our method achieves competitive performance compared with state-of-the-art methods.
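
The collaborative distillation mechanism described in the abstract (each client distills toward the average of the other clients' output distributions, exchanged through the server without sharing raw data) can be illustrated with a minimal PyTorch sketch. Everything below, including the function name mckd_loss, the temperature value, and the KL-divergence formulation, is an assumption made for illustration and is not the authors' released implementation.

```python
# Minimal sketch of mutual collaborative distillation across federated clients.
# Assumption: each client sends softened class-probability predictions to the
# server, and every client treats the average of its peers' distributions as a
# fixed teacher signal.
import torch
import torch.nn.functional as F

def mckd_loss(local_logits, peer_logits_list, temperature=2.0):
    """KL divergence between a client's softened prediction and the averaged
    soft targets of the other clients (hypothetical formulation)."""
    # Average the peers' softened probability distributions.
    peer_probs = torch.stack(
        [F.softmax(p / temperature, dim=1) for p in peer_logits_list]
    ).mean(dim=0)
    # Student log-probabilities from the local model.
    log_probs = F.log_softmax(local_logits / temperature, dim=1)
    # Standard distillation scaling by T^2; the peer average is detached so it
    # acts as a teacher rather than receiving gradients.
    return F.kl_div(log_probs, peer_probs.detach(),
                    reduction="batchmean") * temperature ** 2

# Toy usage: 3 clients, a batch of 4 samples, 10 classes.
torch.manual_seed(0)
logits = [torch.randn(4, 10) for _ in range(3)]
for k in range(3):
    peers = [logits[j] for j in range(3) if j != k]
    print(f"client {k}: distillation loss = {mckd_loss(logits[k], peers).item():.4f}")
```

In practice, each client would add this term to its supervised loss on local data; the knowledge-filter step for unlabeled target data described in the abstract is not sketched here because the abstract does not specify its mechanism.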

Keywords:
Computer science, Generalization, Domain adaptation, Representation learning, Knowledge filter, Artificial intelligence, Machine learning, Knowledge distillation, Data mining, Distributed computing, Theoretical computer science

Metrics

Cited by: 7
FWCI (Field Weighted Citation Impact): 1.79
References: 34
Citation Normalized Percentile: 0.83

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence

Related Documents

JOURNAL ARTICLE

An Unsupervised Federated Domain Adaptation Method Based on Knowledge Distillation

Yunpeng Xiao, Ying Guo, Haipeng Zhu, Chaolong Jia, Qian Li, Rong Wang, Guoyin Wang

Journal: IEEE Transactions on Neural Networks and Learning Systems, Year: 2024, Vol: 36 (6), Pages: 10993-11007
JOURNAL ARTICLE

Cross-domain knowledge distillation for domain adaptation with GCN-driven MLP generalization

Ba Hung Ngo, Tae Jong Choi

Journal: Applied Soft Computing, Year: 2025, Vol: 184, Pages: 113771
JOURNAL ARTICLE

Boosting domain generalization by domain-aware knowledge distillation

Zhongqiang Zhang, Ge Liu, Fuhan Cai, Duo Liu, Xiangzhong Fang

Journal: Knowledge-Based Systems, Year: 2023, Vol: 280, Pages: 111021