Learning to classify unlabeled samples from unseen categories given only limited labeled data is a challenging problem. Existing few-shot learning methods fail to generate satisfactory feature representations because they treat informative features and background interference without distinction. In this paper, we propose an attention-guided two-stream convolutional neural network (AGTSNet) that addresses this indiscriminate treatment by highlighting the salient and discriminative features of the main object while suppressing background interference. Comprehensive experiments on few-shot image classification over four standard benchmark datasets demonstrate the effectiveness of our method.
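The abstract describes attention-guided feature extraction that amplifies salient object features and suppresses background responses. The following is a minimal, hypothetical sketch of the general idea of attention-based spatial reweighting (softmax-normalized channel-mean scores); it is an illustration only, not the paper's AGTSNet architecture, and all names here are invented for the example.

```python
import math

def spatial_attention(features):
    """Toy spatial-attention gate (illustrative only, not the paper's AGTSNet).

    `features` is a list of spatial positions, each a list of channel values.
    A per-position score (the channel mean) is softmax-normalized and used to
    reweight the features, so high-scoring (object-like) positions are
    amplified and low-scoring (background-like) positions are suppressed.
    """
    scores = [sum(pos) / len(pos) for pos in features]   # channel-mean score per position
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]             # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[w * c for c in pos] for w, pos in zip(weights, features)]

# 3 spatial positions, 2 channels each; the middle position has the
# strongest activations and so receives most of the attention mass
feats = [[1.0, 2.0], [4.0, 6.0], [0.5, 0.5]]
out = spatial_attention(feats)
```

In a real two-stream network such weights would typically be produced by a learned attention branch rather than a fixed channel mean; the fixed score here just keeps the sketch self-contained.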
Minghua Zhao, Dong Shuangshuang, Jing Hu, Shuangli Du, Shi Cheng, Li Peng, Zhenghao Shi
Kangkang Zhao, Ziyan Zhang, Bo Jiang, Jin Tang
Xu Zhang, Youjia Zhang, Zuyu Zhang