Abstract

The copy module has been widely adopted in recent abstractive summarization models, allowing the decoder to extract words from the source text into the summary. Typically, the encoder-decoder attention serves as the copy distribution, but guaranteeing that important source words are actually copied remains a challenge. In this work, we propose a Transformer-based model that enhances the copy mechanism. Specifically, we measure the importance of each source word by its degree centrality in a directed graph built from the self-attention layers of the Transformer, and we use this centrality to guide the copy process explicitly. Experimental results show that the self-attention graph provides useful guidance for the copy distribution, and our proposed models significantly outperform baseline methods on the CNN/Daily Mail and Gigaword datasets.
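
As a rough illustration of the idea in the abstract, the sketch below builds a directed graph by thresholding a self-attention matrix, scores each source token by its normalized in-degree (degree centrality), and uses those scores to reweight a copy distribution. The threshold value, the interpolation weight alpha, and both function names are illustrative assumptions; the paper's exact graph construction and guidance mechanism may differ.

    import torch

    def degree_centrality_from_attention(attn, threshold=0.05):
        """Score each source token by degree centrality in a directed graph
        derived from a self-attention matrix.

        attn: (src_len, src_len) tensor of attention weights; attn[i, j] is
              the attention that token i pays to token j (rows sum to 1).
        """
        src_len = attn.size(0)
        # Draw a directed edge i -> j whenever token i attends to token j
        # more strongly than the (assumed) threshold.
        adj = (attn > threshold).float()
        # In-degree of j = number of tokens attending to j; normalize by
        # the maximum possible degree, as in standard degree centrality.
        return adj.sum(dim=0) / max(src_len - 1, 1)

    def centrality_guided_copy(copy_dist, centrality, alpha=0.5):
        """Interpolate the model's copy distribution with normalized
        centrality scores so central source words are favored for copying.
        alpha is an assumed interpolation weight, not from the paper."""
        prior = centrality / centrality.sum().clamp(min=1e-8)
        guided = (1.0 - alpha) * copy_dist + alpha * prior
        return guided / guided.sum()

    # Toy usage: four source tokens, attention from a single head.
    attn = torch.softmax(torch.randn(4, 4), dim=-1)
    copy_dist = torch.softmax(torch.randn(4), dim=-1)
    scores = degree_centrality_from_attention(attn, threshold=0.2)
    print(centrality_guided_copy(copy_dist, scores))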

Keywords:
Automatic summarization, Computer science, Encoder, Transformer, Centrality, Artificial intelligence, Graph, Natural language processing, Attention network, Theoretical computer science

Metrics

Cited by: 71
FWCI (Field-Weighted Citation Impact): 9.11
References: 51
Citation Normalized Percentile: 0.98 (in the top 10%)

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Text Analysis Techniques (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

CONFERENCE PAPER

A Hierarchical Neural Abstractive Summarization with Self-Attention Mechanism

Weijun Yang, Zhi-Cheng Tang, Xinhuai Tang

Published in: Proceedings of the 2018 3rd International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2018)  Year: 2018
JOURNAL ARTICLE

An abstractive text summarization technique using transformer model with self-attention mechanism

Sandeep Kumar, Arun Solanki

Journal: Neural Computing and Applications  Year: 2023  Vol: 35 (25)  Pages: 18603-18622