JOURNAL ARTICLE

Attention history-based attention for abstractive text summarization

Abstract

Recently, encoder-decoder models using attention have shown meaningful results on abstractive summarization tasks. In the attention mechanism, the attention distribution is generated based only on the current decoder state. However, since there are patterns in how summaries are written, patterns should also exist in how attention is paid. In this work, we propose an attention history-based attention model that exploits such patterns in the attention history. We build an additional recurrent network, the attention reader network, to model the attention patterns. We also employ an accumulation vector that keeps the total amount of effective attention paid to each part of the input text, guided by an additional network named the accumulation network. Both the attention reader network and the accumulation vector are used as additional inputs to the attention mechanism. Evaluation results on the CNN/Daily Mail dataset show that our method better captures attention patterns and achieves higher ROUGE scores than strong baselines.
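The mechanism described in the abstract can be sketched roughly as follows. This is a hypothetical minimal illustration, not the authors' implementation: a plain summed accumulation vector stands in for the paper's learned accumulation network, a small tanh RNN stands in for the attention reader network, all weight names and shapes are invented, and encoder hidden states are omitted for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class HistoryAttention:
    """Sketch of history-aware attention over a fixed-length source.

    Scores each source position from (1) the current decoder state,
    (2) a "reader" state summarizing past attention distributions, and
    (3) an accumulation vector of total attention paid so far.
    Illustrative only; not the paper's exact parameterization.
    """

    def __init__(self, dec_dim, reader_dim, src_len, seed=0):
        rng = np.random.default_rng(seed)
        self.W_dec = rng.normal(0.0, 0.1, (dec_dim, src_len))
        self.W_read = rng.normal(0.0, 0.1, (reader_dim, src_len))
        self.w_acc = np.full(src_len, 0.5)   # per-position penalty weight
        # simple tanh RNN standing in for the attention reader network
        self.U = rng.normal(0.0, 0.1, (reader_dim, reader_dim))
        self.V = rng.normal(0.0, 0.1, (src_len, reader_dim))
        self.reader = np.zeros(reader_dim)   # attention-history summary
        self.accum = np.zeros(src_len)       # accumulation vector

    def step(self, dec_state):
        # scores combine the decoder state, the history summary, and a
        # penalty on positions that already received much attention
        scores = (dec_state @ self.W_dec
                  + self.reader @ self.W_read
                  - self.w_acc * self.accum)
        attn = softmax(scores)
        # the reader RNN "reads" the new distribution into its state
        self.reader = np.tanh(self.reader @ self.U + attn @ self.V)
        self.accum += attn                   # total attention so far
        return attn
```

Each decoding step thus sees not only the decoder state but also a learned summary of where attention has gone before, which is the core idea the abstract attributes to the attention reader network and the accumulation vector.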

Keywords:
Automatic summarization, Computer science, Attention network, Encoder, Mechanism (biology), Artificial intelligence, Process (computing), Natural language processing, Machine learning

Metrics

Cited By: 11
FWCI (Field Weighted Citation Impact): 1.03
Refs: 30
Citation Normalized Percentile: 0.80

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Text and Document Classification Technologies (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE

Attention-based Transformer for Assamese Abstractive Text Summarization

Pritom Jyoti Goutom, Nomi Baruah, Paramananda Sonowal

Journal: Procedia Computer Science, Year: 2024, Vol: 235, Pages: 1097-1104
JOURNAL ARTICLE

Abstractive Text Summarization Using Attention-based Stacked LSTM

Mimansha Singh, Vrinda Yadav

Conference: 2022 Fifth International Conference on Computational Intelligence and Communication Technologies (CCICT), Year: 2022, Pages: 236-241