JOURNAL ARTICLE

Improving reasoning with contrastive visual information for visual question answering

Long Yu, Pengjie Tang, Hanli Wang, Jian Yu

Year: 2021   Journal: Electronics Letters   Vol: 57 (20)   Pages: 758-760   Publisher: Institution of Engineering and Technology

Abstract

Visual Question Answering (VQA) aims to output a correct answer from cross-modal inputs, namely a question and visual content. In the typical pipeline, information reasoning plays the key role in producing a reasonable answer. However, many popular models do not fully exploit the available visual information. To address this challenge, a new strategy is proposed in this work to make full use of visual information during reasoning. Specifically, visual information is divided into two subsets: (1) a question-relevant visual set, and (2) a question-irrelevant visual set. Both sets are then employed during reasoning to generate reasonable outputs. Experiments conducted on the benchmark VQAv2 dataset demonstrate the effectiveness of the proposed strategy. The project page can be found at https://mic.tongji.edu.cn/e6/8d/c9778a190093/page.htm .
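The abstract describes partitioning visual information into question-relevant and question-irrelevant subsets before reasoning. A minimal sketch of one plausible way to realize such a split, using attention scores between region features and a question vector, is shown below. The function name, the softmax-based scoring, and the top-k cutoff are all illustrative assumptions; the paper's actual partition rule may differ.

```python
import numpy as np

def split_visual_features(regions, question_vec, top_k=2):
    """Split region features into question-relevant and question-irrelevant
    subsets by attention score. Hypothetical sketch: the paper's actual
    partition criterion is not specified in the abstract."""
    scores = regions @ question_vec          # similarity of each region to the question
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                       # softmax attention weights over regions
    order = np.argsort(-attn)                # regions sorted by descending relevance
    relevant = regions[order[:top_k]]        # question-relevant visual set
    irrelevant = regions[order[top_k:]]      # question-irrelevant visual set
    return relevant, irrelevant, attn
```

In such a design, both subsets would feed the reasoning module, with the irrelevant set acting as contrastive context rather than being discarded.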

Keywords:
Question answering, Computer science, Artificial intelligence, Natural language processing

Metrics

Cited By: 6
FWCI (Field Weighted Citation Impact): 0.61
References: 15
Citation Normalized Percentile: 0.68

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Human Pose and Action Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)