JOURNAL ARTICLE

A Context-Based Network For Referring Image Segmentation

Abstract

Referring image segmentation aims to segment the object referred to by a natural language expression. Most existing methods simply concatenate visual and linguistic features, and they underestimate the importance of language-to-vision and object-to-object relationships when the expression contains multiple entities. We therefore propose a new network, the Context-Based Network (CBN), to locate the correct referent more accurately. CBN is composed of two modules: Intra Relation Selection (Intra-RS) and Inter Relation Selection (Inter-RS). Intra-RS captures object-to-object relationships in a joint visual-linguistic embedding space, while Inter-RS uses multi-scale linguistic features as a guide to match the most similar region in the image feature maps. In addition, we apply spatial pyramid pooling to gather global information and mitigate the limited-receptive-field problem. Experimental results on four public datasets show that CBN achieves performance comparable to other state-of-the-art methods.
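
The abstract only names the building blocks, so the following sketch illustrates one plausible reading of two of them in PyTorch: a language-guided matching step in the spirit of Inter-RS (cross-attention from visual positions to word features) and a PSPNet-style spatial pyramid pooling module for global context. All class names, layer choices, and dimensions below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPooling(nn.Module):
    """PSPNet-style spatial pyramid pooling (assumed formulation): pool the
    feature map at several grid sizes, project each pooled map with a 1x1
    convolution, upsample back, and concatenate with the input so every
    position sees global context."""

    def __init__(self, in_channels, bins=(1, 2, 3, 6)):
        super().__init__()
        out_channels = in_channels // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
                nn.ReLU(inplace=True),
            )
            for b in bins
        )

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for stage in self.stages:
            pooled = stage(x)
            feats.append(F.interpolate(pooled, size=(h, w),
                                       mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)


class LanguageGuidedMatching(nn.Module):
    """An Inter-RS-like operation under standard cross-attention assumptions:
    each spatial position queries the word features, so the language guides
    which image region is matched."""

    def __init__(self, vis_dim, lang_dim, key_dim=256):
        super().__init__()
        self.q = nn.Conv2d(vis_dim, key_dim, 1)   # queries from visual positions
        self.k = nn.Linear(lang_dim, key_dim)     # keys from word features
        self.v = nn.Linear(lang_dim, vis_dim)     # values from word features

    def forward(self, vis, words):
        # vis: (B, Cv, H, W); words: (B, T, Cl)
        B, Cv, H, W = vis.shape
        q = self.q(vis).flatten(2).transpose(1, 2)            # (B, HW, key_dim)
        k = self.k(words)                                     # (B, T, key_dim)
        v = self.v(words)                                     # (B, T, Cv)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(B, Cv, H, W)  # language context
        return vis + ctx  # residual fusion of language into the visual map


if __name__ == "__main__":
    vis = torch.randn(2, 512, 26, 26)    # hypothetical backbone feature map
    words = torch.randn(2, 15, 300)      # hypothetical word embeddings
    fused = LanguageGuidedMatching(512, 300)(vis, words)
    out = PyramidPooling(512)(fused)     # (2, 1024, 26, 26)
    print(out.shape)
```

In this reading, Inter-RS-style matching injects the expression into the visual map before segmentation, and pyramid pooling then supplies the global context the abstract says is needed to overcome the limited receptive field.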

Keywords:
Computer vision, Natural language processing, Image segmentation, Artificial intelligence, Pattern recognition, Embedding, Spatial pyramid pooling, Referent

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.21
Refs: 29
Citation Normalized Percentile: 0.50

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)