JOURNAL ARTICLE

Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation

Abstract

Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts. Code: https://github.com/RitaRamo/smallcap
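
The retrieval-augmented prompting described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses off-the-shelf Hugging Face CLIP and GPT-2 checkpoints, a toy three-caption datastore, and an assumed input path "example.jpg", and it omits the trained cross-attention layers that feed CLIP image features into the decoder in the actual model, so generation here is conditioned only on the retrieved captions.

```python
# Sketch of Smallcap-style retrieval-augmented prompting (illustration only).
# Shows the retrieval-and-prompt flow; the real model additionally feeds CLIP
# image features into GPT-2 through newly trained cross-attention layers.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import (CLIPModel, CLIPProcessor,
                          GPT2LMHeadModel, GPT2Tokenizer)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Toy datastore of captions (an assumption for this sketch; in the paper it
# is e.g. COCO or web data, and can be swapped without any retraining).
datastore = [
    "a dog running across a grassy field",
    "two people riding bicycles down a city street",
    "a plate of pasta with tomato sauce on a table",
]

def retrieve_captions(image, k=2):
    """Rank datastore captions by CLIP image-text cosine similarity."""
    with torch.no_grad():
        img_emb = clip.get_image_features(
            **processor(images=image, return_tensors="pt"))
        txt_emb = clip.get_text_features(
            **processor(text=datastore, return_tensors="pt", padding=True))
    sims = F.normalize(img_emb) @ F.normalize(txt_emb).T  # (1, |datastore|)
    top = sims.topk(k, dim=-1).indices[0].tolist()
    return [datastore[i] for i in top]

def build_prompt(captions):
    # Approximates the paper's prompt template: retrieved captions condition
    # the decoder, which then completes "This image shows ...".
    context = " ".join(f"{c}." for c in captions)
    return f"Similar images show {context} This image shows"

image = Image.open("example.jpg")  # hypothetical input image
prompt = build_prompt(retrieve_captions(image))
inputs = tokenizer(prompt, return_tensors="pt")
out = gpt2.generate(**inputs, max_new_tokens=20,
                    pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because the datastore is only embedded and searched, never trained on, swapping in target-domain captions changes the model's output without any finetuning, which is the training-free transfer property the abstract highlights.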

Keywords:
Closed captioning, Computer science, Image retrieval, Artificial intelligence, Computer vision, Information retrieval, Natural language processing

Metrics

Cited by: 94
FWCI (Field-Weighted Citation Impact): 17.11
References: 70
Citation Normalized Percentile: 0.99 (top 1% of its field)

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Human Pose and Action Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
