Multi-hop question answering (QA) is a challenging task in natural language processing (NLP) that requires multi-step reasoning over sentences drawn from several passages to find both the answer and the evidence sentences scattered across them. Existing QA models based on Graph Neural Networks (GNNs) have exhibited good performance, but they do not fully exploit the advantages of GNNs. In this paper, we incorporate an effective multi-hop attention mechanism into the GNN to aggregate richer information from high-order nodes of the graph. In addition, when multiple tasks are jointly optimized, the performance of all tasks rarely improves together. To address this problem, we design a novel answer-evidence contrastive learning loss that encourages the model to learn a better shared representation and to distinguish evidence sentences from confusing ones through answer-evidence similarity. Experiments on the HotpotQA dataset demonstrate that the proposed method achieves results comparable to state-of-the-art models and yields significant performance improvements over the baseline.
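The answer-evidence contrastive idea described above can be illustrated with a minimal, hypothetical sketch: given an embedding of the predicted answer and embeddings of candidate sentences, an InfoNCE-style loss pulls evidence sentences toward the answer representation and pushes confusing sentences away. The function name, the temperature value, and the exact form of the loss are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def answer_evidence_contrastive_loss(answer_vec, sentence_vecs, evidence_mask, tau=0.1):
    """InfoNCE-style sketch (assumed form): -log of the mass that the
    evidence sentences receive under a softmax over answer-sentence
    cosine similarities. Lower loss = evidence is more similar to the
    answer than the confusing sentences are."""
    # L2-normalize so dot products are cosine similarities
    a = answer_vec / np.linalg.norm(answer_vec)
    s = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    sims = (s @ a) / tau                       # temperature-scaled similarities
    log_denom = np.log(np.exp(sims).sum())     # all sentences
    log_num = np.log(np.exp(sims[evidence_mask]).sum())  # evidence only
    return log_denom - log_num                 # = -log(evidence mass / total mass)

# Toy example with random embeddings (purely illustrative)
rng = np.random.default_rng(0)
answer = rng.normal(size=8)
sentences = rng.normal(size=(5, 8))
mask = np.array([True, False, True, False, False])  # sentences 0 and 2 are evidence
loss = answer_evidence_contrastive_loss(answer, sentences, mask)
```

Because the evidence similarities appear in both numerator and denominator, the loss is non-negative and reaches zero only when all non-evidence sentences contribute negligible similarity mass.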
You Hao, Heyan Huang, Yue Hu, Yongxiu Xu