JOURNAL ARTICLE

Cross-Modal Self-Attention Network for Referring Image Segmentation

Abstract

We consider the problem of referring image segmentation. Given an input image and a natural language expression, the goal is to segment the object referred to by the expression in the image. Existing works in this area process the language expression and the input image in largely separate representations and do not sufficiently capture long-range correlations between the two modalities. In this paper, we propose a cross-modal self-attention (CMSA) module that effectively captures long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and on important regions in the input image. In addition, we propose a gated multi-level fusion module that selectively integrates self-attentive cross-modal features from different levels of the image representation, controlling the information flow across levels. We validate the proposed approach on four evaluation datasets, where it consistently outperforms existing state-of-the-art methods.
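The two modules described in the abstract can be illustrated with a toy NumPy sketch. This is our own illustration under simplifying assumptions, not the authors' implementation: random matrices stand in for learned query/key/value projections and gates, sizes are arbitrary, and word-axis average pooling replaces the paper's exact aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, T = 4, 4, 3          # toy spatial grid and number of words
Dv, Dl, Dk = 8, 6, 16      # visual dim, word dim, attention key dim

V = rng.normal(size=(H * W, Dv))   # visual features, one vector per position
E = rng.normal(size=(T, Dl))       # word embeddings of the expression

# Cross-modal features: every (position, word) pair is represented by
# the concatenation of its visual vector and word embedding.
F = np.concatenate(
    [np.repeat(V, T, axis=0),      # (H*W*T, Dv): each position repeated per word
     np.tile(E, (H * W, 1))],      # (H*W*T, Dl): word list tiled per position
    axis=1)                        # (H*W*T, Dv+Dl)

# Random projections stand in for learned query/key/value maps.
Wq = rng.normal(size=(Dv + Dl, Dk))
Wk = rng.normal(size=(Dv + Dl, Dk))
Wv = rng.normal(size=(Dv + Dl, Dv + Dl))
Q, K, Vals = F @ Wq, F @ Wk, F @ Wv

# Self-attention over all (position, word) pairs: each entry can attend
# to any other position-word combination, i.e. long-range and cross-modal.
scores = Q @ K.T / np.sqrt(Dk)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)          # row-wise softmax
out = A @ Vals                             # (H*W*T, Dv+Dl)

# Collapse the word axis (average pooling here) to one self-attentive
# cross-modal feature per spatial position.
cmsa = out.reshape(H * W, T, Dv + Dl).mean(axis=1)   # (H*W, Dv+Dl)

# Gated multi-level fusion, sketched: sigmoid gates weight features from
# different levels (random stand-ins for the other levels) before summing.
levels = [cmsa, rng.normal(size=cmsa.shape), rng.normal(size=cmsa.shape)]
gates = [1.0 / (1.0 + np.exp(-(f @ rng.normal(size=(Dv + Dl, 1)))))
         for f in levels]
fused = sum(g * f for g, f in zip(gates, levels))    # (H*W, Dv+Dl)

print(cmsa.shape, fused.shape)
```

The key point of the sketch is the joint attention domain: because attention is computed over all position-word pairs at once, a word can directly influence a distant image region and vice versa, which is the long-range cross-modal dependency the paper targets.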

Keywords:
Computer science, Expression (computer science), Image (mathematics), Artificial intelligence, Focus (optics), Modal, Image segmentation, Segmentation, Modalities, Object (grammar), Pattern recognition (psychology), Range (aeronautics), Natural language, Computer vision, Image fusion, Natural language processing

Metrics

Cited By: 486
FWCI (Field-Weighted Citation Impact): 20.63
References: 39
Citation Normalized Percentile: 0.99 (in top 1% and top 10%)

Topics

Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

Referring Segmentation in Images and Videos with Cross-Modal Self-Attention Network

Linwei Ye, Mrigank Rochan, Zhi Liu, Xiaoqin Zhang, Yang Wang

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, Year: 2021, Vol: 44 (7), Pages: 1-1
Referring Segmentation via Encoder-Fused Cross-Modal Attention Network

Guang Feng, Lihe Zhang, Jiayu Sun, Zhiwei Hu, Huchuan Lu

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, Year: 2022, Vol: 45 (6), Pages: 7654-7667
Cross-modal attention guided visual reasoning for referring image segmentation

Wenjing Zhang, Mengnan Hu, Quange Tan, Qianli Zhou, Rong Wang

Journal: Multimedia Tools and Applications, Year: 2023, Vol: 82 (19), Pages: 28853-28872
CMIRNet: Cross-Modal Interactive Reasoning Network for Referring Image Segmentation

Mingzhu Xu, Tianxiang Xiao, Yutong Liu, Haoyu Tang, Yupeng Hu, Liqiang Nie

Journal: IEEE Transactions on Circuits and Systems for Video Technology, Year: 2024, Vol: 35 (4), Pages: 3234-3249