JOURNAL ARTICLE

Multi-hop Attention GNN with Answer-Evidence Contrastive Loss for Multi-hop QA

Abstract

Multi-hop question answering (QA) is a challenging natural language processing (NLP) task that requires multi-step reasoning over sentences drawn from several passages to find both the answer and the scattered evidence sentences. Existing QA models based on Graph Neural Networks (GNNs) have shown good performance, but the advantages of GNNs have not been brought into full play. In this paper, we incorporate an effective multi-hop attention mechanism into the GNN to aggregate richer information from high-order nodes of the graph. In addition, when multiple tasks are jointly optimized, the performance of all tasks usually cannot improve together. To address this problem, we design a novel answer-evidence contrastive learning loss, which encourages the model to learn better shared representations and to distinguish evidence sentences from other confusing ones via answer-evidence similarity. Experiments on the HotpotQA dataset demonstrate that the proposed method achieves results comparable to state-of-the-art models and yields significant performance improvements over the baseline.
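The answer-evidence contrastive objective described in the abstract can be sketched as an InfoNCE-style loss over sentence representations: evidence sentences act as positives for the answer representation, and the remaining confusing sentences as negatives. The function below is a minimal NumPy illustration of that idea, not the paper's actual implementation; the cosine-similarity scoring and the `temperature` parameter are assumptions.

```python
import numpy as np

def answer_evidence_contrastive_loss(answer_vec, sentence_vecs,
                                     evidence_idx, temperature=0.1):
    """InfoNCE-style sketch (assumed form, not the paper's code):
    pull evidence sentences toward the answer representation and
    push the other candidate sentences away.

    answer_vec:    (d,)   pooled answer representation
    sentence_vecs: (n, d) candidate sentence representations
    evidence_idx:  indices of the gold evidence sentences
    """
    # Cosine similarity between the answer and every candidate sentence.
    a = answer_vec / np.linalg.norm(answer_vec)
    s = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    logits = (s @ a) / temperature

    # Numerically stable log-softmax over all candidate sentences.
    m = np.max(logits)
    log_probs = logits - (m + np.log(np.sum(np.exp(logits - m))))

    # Negative mean log-probability of the evidence (positive) sentences.
    return -np.mean(log_probs[list(evidence_idx)])
```

When the gold evidence sentence is close to the answer vector the loss is small; labelling a dissimilar sentence as evidence yields a larger loss, which is the gradient signal that separates evidence from confusing sentences.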

Keywords:
Computer science; Artificial intelligence; Natural language processing; Question answering; Machine learning; Graph; Attention network; Performance improvement

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 41
Citation Normalized Percentile: 0.09

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Graph Neural Networks (Physical Sciences → Computer Science → Artificial Intelligence)
© 2026 ScienceGate Book Chapters — All rights reserved.