JOURNAL ARTICLE

Generalized Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data

Yuqian Fu, Yanwei Fu, Jingjing Chen, Yu-Gang Jiang

Year: 2022 | Journal: IEEE Transactions on Image Processing | Vol: 31 | Pages: 7078-7090 | Publisher: Institute of Electrical and Electronics Engineers

Abstract

Vanilla Few-Shot Learning (FSL) builds a classifier for a new concept from one or very few target examples, under the general assumption that source and target classes are sampled from the same domain. The more recent task of Cross-Domain Few-Shot Learning (CD-FSL) tackles FSL when there is a large domain shift between the source and target datasets. Extensive efforts on CD-FSL have been made either by directly extending the meta-learning paradigm of vanilla FSL methods or by employing massive unlabeled target data to help learn models. In this paper, we observe that in the CD-FSL task, the few labeled target images have never been explicitly leveraged to inform the model during training, even though such a labeled target set is crucial for bridging the large domain gap. Critically, this paper advocates a more practical training scenario for CD-FSL, and our key insight is to utilize a few labeled target examples to guide the learning of the CD-FSL model. Technically, we propose a novel Generalized Meta-learning-based Feature-Disentangled Mixup network, namely GMeta-FDMixup, and make three key contributions in applying it to CD-FSL. Firstly, we present two mixup modules, mixup-P and mixup-M, that facilitate using the unbalanced and disjoint source and target datasets; these two novel modules enable diverse image generation for training the model on the source domain. Secondly, to narrow the domain gap explicitly, we contribute a novel feature disentanglement module that learns to decouple domain-irrelevant and domain-specific features; by stripping away the domain-specific features, we alleviate the negative effects caused by the domain inductive bias. Finally, we repurpose a new contrastive learning module, dubbed ConL, which introduces a contrastive loss to prevent the model from capturing only category-related features, thereby improving generalization to novel categories.
Extensive experimental results on two benchmarks show the superiority of our setting and the effectiveness of our method. Code and models will be released.
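The abstract's mixup modules blend source and target examples to generate diverse training images. The sketch below shows only the generic mixup operation that such modules are built on (a convex combination with a Beta-sampled ratio); the paper's actual mixup-P and mixup-M modules operate at the pixel and feature levels with learned or task-specific mixing, and the function name and signature here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mixup(x_source, x_target, alpha=1.0, rng=None):
    """Convex-combine a source and a target example with a mixing
    ratio lam drawn from Beta(alpha, alpha); returns (mixed, lam).

    With alpha=1.0 the ratio is uniform on [0, 1]; smaller alpha
    pushes lam toward 0 or 1 (mostly one domain), larger alpha
    toward 0.5 (balanced blends).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mixed = lam * x_source + (1.0 - lam) * x_target
    return mixed, lam
```

In standard mixup the labels are combined with the same `lam`, so one mixed image carries soft supervision from both domains.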

Keywords:
Computer science, Artificial intelligence, Domain (mathematical analysis), Pattern recognition (psychology), Computer vision, Mathematics

Metrics

Cited By: 29
FWCI (Field-Weighted Citation Impact): 5.68
Refs: 80
Citation Normalized Percentile: 0.94
Is in top 1%
Is in top 10%

Citation History

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
COVID-19 diagnosis using AI
Health Sciences → Medicine → Radiology, Nuclear Medicine and Imaging
Machine Learning and ELM
Physical Sciences → Computer Science → Artificial Intelligence

Related Documents

CONFERENCE PAPER

TGDM: Target Guided Dynamic Mixup for Cross-Domain Few-Shot Learning

Linhai Zhuo, Yuqian Fu, Jingjing Chen, Yixin Cao, Yu-Gang Jiang

Published in: Proceedings of the 30th ACM International Conference on Multimedia | Year: 2022 | Pages: 6368-6376
CONFERENCE PAPER

Domain-Agnostic Meta-Learning for Cross-Domain Few-Shot Classification

Wei-Yu Lee, Jheng-Yu Wang, Yu-Chiang Frank Wang

Published in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | Year: 2022 | Pages: 1715-1719