Text summarization aims to generate a brief version of a given article while preserving its essential meaning. Most existing solutions rely on the standard attention-based encoder-decoder framework, in which every token of the source article, including redundant content, contributes to the decoder through the attention mechanism. Filtering out this redundancy therefore becomes an important issue in text summarization. In this study, we propose a localness attention network, designed with simplicity and feasibility in mind, which circumscribes different local regions of the source article as the contributors at different decoding steps. To further strengthen the localness model, we share the semantic space of the encoder and decoder. Experimental results on two benchmark datasets demonstrate the effectiveness and applicability of the proposed method in comparison with several well-practiced baselines.
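A common way to realize this kind of localness is to bias the attention scores with a window centered on a predicted focus position, so that tokens far from the current local region are suppressed before the softmax. The sketch below illustrates that idea with a Gaussian window; the window shape, the function name `localness_attention`, and the parameters `center` and `width` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def localness_attention(scores, center, width):
    """Bias raw attention scores with a Gaussian window so that only a
    local region around `center` contributes at this decoding step.
    Illustrative sketch only; the exact localness model may differ."""
    positions = np.arange(len(scores), dtype=float)
    # Penalize source tokens far from the focus position `center`;
    # `width` controls how wide the local region is.
    bias = -((positions - center) ** 2) / (2.0 * width ** 2)
    biased = scores + bias
    # Softmax over the biased scores yields the final attention weights.
    exp = np.exp(biased - biased.max())
    return exp / exp.sum()

# Example: uniform raw scores over a 7-token source, focused near token 3.
weights = localness_attention(np.zeros(7), center=3.0, width=1.0)
```

With uniform raw scores, the resulting weights peak at the focus position and decay for distant tokens, which is exactly the filtering effect the localness model is meant to provide.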