Xinyi Chang, Yadong Sun, Yanjiang Wang
Existing few-shot classification methods suffer severe performance degradation when there is a domain gap between the base-class data used for training and the novel-class data used for the downstream task. Cross-domain few-shot learning (CD-FSL) addresses this problem by relaxing the restrictive assumption in few-shot learning (FSL) that the base (source) classes and the novel (target) classes are sampled from the same domain. Because of the domain gap between source and target data, a model trained on the source domain is often biased when transferred to the target domain, so it fails to generalize well to the target task and adapts poorly. To overcome this problem, we propose a new target-guided knowledge distillation (TGKD) framework. We design two distinct knowledge distillation methods that enhance the model's generalization and its pertinence to the target task in the teacher training stage and the knowledge distillation stage, respectively. Specifically, in the teacher training stage, we propose a self-cross distillation that reduces intra-class semantic variation, thereby improving the robustness of the model's feature representations. In the knowledge distillation stage, we propose a dual-student network that uses a small portion of unlabeled target samples to alleviate the model's bias when transferring to the target data. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our method.
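Since the abstract does not spell out the distillation objective, the following is a minimal, generic sketch of the soft-label knowledge distillation loss that teacher-student frameworks of this kind typically build on, written in PyTorch. The function name, the temperature `T`, the mixing weight `alpha`, and the treatment of unlabeled target samples are illustrative assumptions, not the authors' exact TGKD formulation (which additionally involves self-cross distillation and a dual-student network).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels=None, T=4.0, alpha=0.5):
    """Generic soft-label distillation loss (Hinton-style), shown for illustration.

    Computes the KL divergence between temperature-softened teacher and student
    predictions; when labels are available (e.g. labeled source-domain episodes),
    it is mixed with a supervised cross-entropy term. Unlabeled target samples
    can be trained with the distillation term alone (labels=None).
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    if labels is None:
        return kd  # unlabeled target samples: distillation signal only
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Hypothetical usage: a labeled source batch and an unlabeled target batch.
if __name__ == "__main__":
    src_student, src_teacher = torch.randn(8, 5), torch.randn(8, 5)
    src_labels = torch.randint(0, 5, (8,))
    tgt_student, tgt_teacher = torch.randn(4, 5), torch.randn(4, 5)
    loss = distillation_loss(src_student, src_teacher, src_labels) \
         + distillation_loss(tgt_student, tgt_teacher)
    print(loss.item())
```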