Yunpeng Xiao, Ying Guo, Haipeng Zhu, Chaolong Jia, Qian Li, Rong Wang, Guoyin Wang
Conventional unsupervised multi-source domain adaptation (UMDA) methods assume that all source-domain data are directly accessible. To address the problem that source-domain data cannot be obtained directly in federated learning (FL), this article proposes a knowledge distillation-based multi-source domain adaptation method suited to FL. First, since knowledge distillation allows learning solely through model access, the article adopts an improved voting mechanism that applies a smoothing technique to the confidence distributions of the source-domain models. This reduces the influence of models with extremely high confidence and thereby extracts high-quality consensus knowledge. Second, the article designs an adaptive weighting strategy for teacher models: it identifies irrelevant and malicious domains from the similarity between the consensus knowledge and each teacher model's output, improving the model's robustness against negative transfer. Finally, the article introduces the idea of contrastive learning to control the drift of individual source domains and to bridge the deviation between the representations learned by the local models and the global model. Experiments show that the proposed method outperforms mainstream UMDA methods and remains robust to negative transfer, making it suitable for many practical FL applications.
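A minimal sketch of the pipeline the abstract describes, assuming PyTorch; this is not the authors' code, and all function names (`smoothed_vote`, `adaptive_teacher_weights`, `distillation_step`) are hypothetical. It reads "smoothing" as temperature scaling of each teacher's softmax and "similarity" as a negative KL divergence to the consensus; the paper's exact choices may differ, and the contrastive regularizer mentioned in the abstract is omitted here.

```python
# Sketch of distillation-based UMDA under FL constraints: only the source
# (teacher) models are available, never the source data. `teachers` is a
# list of trained source-domain models; `x` is a batch of unlabeled
# target-domain inputs.
import torch
import torch.nn.functional as F

def smoothed_vote(teachers, x, temperature=2.0):
    """Consensus knowledge via temperature-smoothed soft voting.

    A temperature > 1 flattens each teacher's softmax, damping models with
    extremely high confidence before the vote (one reading of the
    abstract's 'smoothing technique').
    """
    probs = [F.softmax(t(x) / temperature, dim=1) for t in teachers]
    consensus = torch.stack(probs).mean(dim=0)  # uniform soft vote
    return consensus, probs

def adaptive_teacher_weights(consensus, probs):
    """Weight each teacher by how close its output is to the consensus.

    Teachers that diverge strongly from the consensus (candidates for
    irrelevant or malicious domains) get low weight, limiting negative
    transfer. Similarity here is negative KL(consensus || teacher).
    """
    sims = torch.stack([
        -F.kl_div(p.log(), consensus, reduction="batchmean") for p in probs
    ])
    return F.softmax(sims, dim=0)  # normalize similarities to weights

def distillation_step(student, teachers, x, optimizer, temperature=2.0):
    """One knowledge-distillation update of the target-side student."""
    with torch.no_grad():
        consensus, probs = smoothed_vote(teachers, x, temperature)
        w = adaptive_teacher_weights(consensus, probs)
        # Re-aggregate the teachers with the adaptive weights.
        target = sum(wi * pi for wi, pi in zip(w, probs))
    log_student = F.log_softmax(student(x) / temperature, dim=1)
    loss = F.kl_div(log_student, target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because every step consumes only model outputs on target data, the sketch respects the FL constraint in the abstract: no source-domain samples ever leave their clients.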