Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
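The core idea in the abstract, learning prompts in both branches while coupling them so vision prompts are conditioned on language prompts, can be sketched as follows. This is a minimal NumPy illustration of that coupling, not the paper's implementation: the dimensions, the random initialization, and the plain linear coupling are assumptions for illustration (in MaPLe these are trained parameters inside CLIP's transformer stages).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (roughly CLIP ViT-B/16-like): n_ctx learnable
# context tokens, text width d_lang, vision width d_vis, prompts inserted
# across the first `depth` transformer stages ("deep" prompting).
n_ctx, d_lang, d_vis, depth = 2, 512, 768, 9

# One set of learnable language prompts per stage.
lang_prompts = [rng.standard_normal((n_ctx, d_lang)) * 0.02
                for _ in range(depth)]

# Per-stage coupling functions: vision prompts are *projected from* the
# language prompts, so the two branches share parameters and cannot
# drift into independent uni-modal solutions.
couplers = [rng.standard_normal((d_lang, d_vis)) * 0.02
            for _ in range(depth)]

def vision_prompts(lang_prompts, couplers):
    """Derive the vision-branch prompts via the language-to-vision coupling."""
    return [p @ w for p, w in zip(lang_prompts, couplers)]

vis_prompts = vision_prompts(lang_prompts, couplers)
```

Each stage's vision prompts end up with shape `(n_ctx, d_vis)` and would be prepended to that stage's patch tokens, mirroring how the language prompts are prepended to the text tokens.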

Keywords:
Vision-language models CLIP Prompt learning Multi-modal learning Generalization Image recognition Natural language processing

Metrics

- Cited by: 579
- FWCI (Field-Weighted Citation Impact): 105.36
- References: 77
- Citation Normalized Percentile: 1.00 (in top 1% and top 10%)

Topics

- Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
- Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)
- Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE
Multi-modal prompt learning with bidirectional layer-wise prompt fusion
Haitao Yin, Yumeng Zhao
Information Fusion, 2025, Vol. 117, p. 102919

CONFERENCE PAPER
Prompt Learning for Multi-modal COVID-19 Diagnosis
Yang Yu, Rong Lu, Mengyao Wang, Min Huang, Yazhou Zhang, Yijie Ding
2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2022, pp. 2803-2807

JOURNAL ARTICLE
LAMM: Label Alignment for Multi-Modal Prompt Learning
Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu
Proceedings of the AAAI Conference on Artificial Intelligence, 2024, Vol. 38(3), pp. 1815-1823

JOURNAL ARTICLE
MmAP: Multi-Modal Alignment Prompt for Cross-Domain Multi-Task Learning
Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding
Proceedings of the AAAI Conference on Artificial Intelligence, 2024, Vol. 38(14), pp. 16076-16084
© 2026 ScienceGate Book Chapters — All rights reserved.