Abstract

In this paper, we present a novel sequence-to-sequence architecture with multi-head attention for automatic summarization of long text. Summaries generated by previous abstractive methods commonly suffer from repeated content and loss of information from the original article. To address these problems, we propose a multi-head attention summarization (MHAS) model, which uses a multi-head attention mechanism to learn relevant information in different representation subspaces. The MHAS model takes previously predicted words into account when generating new words, which helps it avoid producing summaries with redundant repetition. It also learns the internal structure of the article by adding self-attention layers to the traditional encoder and decoder, allowing the model to better preserve the original information. In addition, we integrate the multi-head attention distribution into the pointer network to further improve performance. Experiments are conducted on the CNN/Daily Mail dataset, a long-text English corpus. The results show that our proposed model outperforms previous extractive and abstractive models.
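The abstract describes two technical components: multi-head attention that lets each head attend to the source in its own representation subspace, and the reuse of the resulting attention distribution as the copy distribution of a pointer network. The sketch below is a minimal PyTorch illustration of those two ideas, not the authors' implementation; the layer sizes, the head-averaging step, and the class names MultiHeadAttention and PointerCopy are assumptions made for illustration only.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code) of
# multi-head attention whose head-averaged weights are reused as a
# pointer-generator-style copy distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadAttention(nn.Module):
    """Multi-head scaled dot-product attention over an encoded source."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, query, memory):
        # query: (batch, tgt_len, d_model); memory: (batch, src_len, d_model)
        b, tgt_len, _ = query.shape
        src_len = memory.size(1)

        def split_heads(x, length):
            # (batch, length, d_model) -> (batch, heads, length, d_head)
            return x.view(b, length, self.num_heads, self.d_head).transpose(1, 2)

        q = split_heads(self.q_proj(query), tgt_len)
        k = split_heads(self.k_proj(memory), src_len)
        v = split_heads(self.v_proj(memory), src_len)

        # Each head attends to the source in its own representation subspace.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        attn = F.softmax(scores, dim=-1)                 # (b, heads, tgt, src)
        context = (attn @ v).transpose(1, 2).reshape(b, tgt_len, -1)
        return self.out_proj(context), attn


class PointerCopy(nn.Module):
    """Mixes a generation distribution with a copy distribution built from
    the head-averaged multi-head attention weights (pointer-generator style)."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.vocab = nn.Linear(d_model, vocab_size)
        self.gate = nn.Linear(d_model, 1)

    def forward(self, dec_state, attn, src_ids):
        # dec_state: (b, tgt, d_model); attn: (b, heads, tgt, src); src_ids: (b, src)
        p_vocab = F.softmax(self.vocab(dec_state), dim=-1)   # generation probs
        p_gen = torch.sigmoid(self.gate(dec_state))          # mixing gate (b, tgt, 1)
        copy_weights = attn.mean(dim=1)                       # average over heads
        # Scatter copy probabilities onto the vocabulary ids of the source tokens.
        p_copy = torch.zeros_like(p_vocab)
        index = src_ids.unsqueeze(1).expand(-1, dec_state.size(1), -1)
        p_copy = p_copy.scatter_add(-1, index, copy_weights)
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy


if __name__ == "__main__":
    b, src_len, tgt_len, d_model, heads, vocab = 2, 7, 4, 64, 8, 100
    mha = MultiHeadAttention(d_model, heads)
    pointer = PointerCopy(d_model, vocab)
    memory = torch.randn(b, src_len, d_model)        # encoder outputs
    dec_state = torch.randn(b, tgt_len, d_model)     # decoder hidden states
    src_ids = torch.randint(0, vocab, (b, src_len))  # source token ids
    context, attn = mha(dec_state, memory)
    probs = pointer(dec_state + context, attn, src_ids)
    print(probs.shape)  # torch.Size([2, 4, 100]); each row sums to 1
```

Averaging the per-head attention weights is one plausible way to fold a multi-head distribution into a single copy distribution; the paper may combine the heads differently.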

Keywords:
Automatic summarization, Computer science, Natural language processing, Artificial intelligence, Pointer network, Encoder, Representation subspace, Sequence-to-sequence, Attention head, Information retrieval

Metrics

Cited By: 21
FWCI (Field Weighted Citation Impact): 1.38
Refs: 47
Citation Normalized Percentile: 0.85

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Text Analysis Techniques (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE

Optimized dynamic multi-head attention for abstractive text summarization

Shafiya Mushtaq, K. Veningston

Journal: International Journal of Data Science and Analytics, Year: 2025, Vol: 20 (8), Pages: 7233-7255
JOURNAL ARTICLE

Selective and Coverage Multi-head Attention for Abstractive Summarization

Xuwen Zhang, Gongshen Liu

Journal: Journal of Physics Conference Series, Year: 2020, Vol: 1453 (1), Pages: 012004-012004