JOURNAL ARTICLE

Saliency-Guided Complementary Attention for Improved Few-Shot Learning

Abstract

Despite significant progress in recent deep neural networks, most deep learning algorithms rely heavily on abundant training samples. To address this problem, we propose an effective and interpretable few-shot classification model using Saliency-Guided Complementary Attention (SGCA), which aims to learn transferable representations and to build a robust classification module simultaneously. Concretely, we propose to train our feature extractor using an auxiliary task to separate object regions from background clutter guided by saliency detection signals. In addition, to make the separation beneficial to the downstream tasks, we introduce a complementary attention mechanism to force the classification module to focus on various informative parts of the image. Extensive experiments on few-shot learning tasks demonstrate the effectiveness of our proposed method, e.g., we achieve 68.81% and 84.60% for 5-way 1-shot and 5-shot settings on mini-ImageNet, respectively.
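The abstract does not include code, but the complementary-attention idea it describes, pooling features separately over salient regions and over their complement so the classifier sees more than one informative part of the image, can be sketched in a few lines. The function name, the soft-mask normalization, and the hard toy saliency mask below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def complementary_attention_pool(feat, sal):
    """Pool a feature map two ways: by a saliency mask and by its complement.

    feat: (C, H, W) feature map from the extractor.
    sal:  (H, W) saliency map with values in [0, 1].
    Returns two C-dim descriptors: one emphasizing salient (object)
    regions, one emphasizing the complementary (context) regions.
    """
    m = sal / (sal.sum() + 1e-8)            # normalized foreground weights
    c = 1.0 - sal
    c = c / (c.sum() + 1e-8)                # normalized complementary weights
    fg = (feat * m[None]).sum(axis=(1, 2))  # saliency-weighted pooling
    bg = (feat * c[None]).sum(axis=(1, 2))  # complement-weighted pooling
    return fg, bg

# Toy usage with a mock 4-channel feature map and a square saliency mask.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
sal = np.zeros((8, 8))
sal[2:6, 2:6] = 1.0                         # hypothetical object region
fg, bg = complementary_attention_pool(feat, sal)
```

Feeding both descriptors to the classification module, rather than only the saliency-weighted one, is one plausible way to force attention onto diverse informative parts as the abstract describes.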

Keywords:
Few-shot learning; Saliency detection; Complementary attention; Feature extraction; Image classification; Deep learning

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 0.71
References: 42
Citation Normalized Percentile: 0.76

Topics

Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)