JOURNAL ARTICLE

A Context-Aware BERT Retrieval Framework Utilizing Abstractive Summarization

Abstract

Recently, multi-stage reranking frameworks based on the pre-trained language model BERT have significantly improved ranking performance on information retrieval tasks. However, most of these BERT-based reranking frameworks process query-passage pairs independently and ignore cross-passage interaction, even though the context surrounding each candidate passage is extremely important for relevance judgment. Existing relevance aggregation methods obtain this context through statistical methods and lose part of the semantic information. To capture cross-passage interaction, this paper proposes a context-aware BERT ranking framework that utilizes abstractive summarization to enhance text semantics. By using PEGASUS to summarize the passages on both sides of a candidate passage and concatenating the summaries with it as the input sequence, BERT can acquire more semantic information within the limit on input-sequence length. Experimental results on two TREC data sets demonstrate the effectiveness of the proposed method in aggregating contextual semantic relevance.
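The input construction the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: PEGASUS is replaced by a trivial lead-word "summarizer" stand-in so the example stays self-contained, and the function names, concatenation order, and token budget are assumptions.

```python
def summarize(text: str, max_words: int = 20) -> str:
    """Stand-in for PEGASUS abstractive summarization: keep leading words.
    A real implementation would call a PEGASUS model here."""
    return " ".join(text.split()[:max_words])


def build_input(query: str, passages: list[str], idx: int,
                max_tokens: int = 512) -> str:
    """Build a BERT-style query/passage pair sequence for passages[idx],
    with summaries of the neighboring passages concatenated on each side
    so the ranker sees cross-passage context."""
    left = summarize(passages[idx - 1]) if idx > 0 else ""
    right = summarize(passages[idx + 1]) if idx + 1 < len(passages) else ""
    # Candidate passage flanked by the summaries of its neighbors.
    context_passage = " ".join(p for p in (left, passages[idx], right) if p)
    tokens = (["[CLS]"] + query.split() + ["[SEP]"]
              + context_passage.split() + ["[SEP]"])
    # Truncate to respect BERT's input-length limit.
    return " ".join(tokens[:max_tokens])
```

For example, `build_input("deep learning", ["a b", "c d", "e f"], 1)` yields `"[CLS] deep learning [SEP] a b c d e f [SEP]"`: the candidate `"c d"` is scored together with summaries of its left and right neighbors rather than in isolation.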

Keywords:
Automatic summarization, Ranking (information retrieval), Relevance, Information retrieval, Context, Semantics, Artificial intelligence, Natural language processing

Metrics

Cited by: 2
Field-Weighted Citation Impact (FWCI): 0.39
References: 47
Citation Normalized Percentile: 0.64

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Information Retrieval and Search Behavior (Physical Sciences → Computer Science → Information Systems)
Recommender Systems and Techniques (Physical Sciences → Computer Science → Information Systems)

© 2026 ScienceGate Book Chapters — All rights reserved.