Yong Zhang, Jinzhi Liao, Jiuyang Tang, Weidong Xiao, Yuheng Wang
Neural networks have provided an effective approach to extractive document summarization, i.e., selecting sentences from a text to form its summary. However, conventional methods suffer from two shortcomings: they extract the summary directly from the whole document, which contains considerable redundancy, and they neglect the relation between the summary and the document as a whole. This paper proposes TSERNN, a two-stage structure for extractive document summarization: a key-sentence extraction stage, followed by a Recurrent Neural Network-based summarization model. In the extraction stage, it devises a hybrid sentence similarity measure that combines sentence vectors with Levenshtein distance, and integrates this measure into a graph model to extract key sentences. In the second stage, it uses GRUs as basic building blocks and feeds an LDA-based representation of the entire document as an additional feature to support summarization. Finally, the model is evaluated on the CNN/Daily Mail corpus, and the experimental results verify the accuracy and validity of the proposed method.
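The abstract does not give the exact formula for the hybrid similarity measure. A minimal sketch of one plausible combination, assuming a weighted average (weight `alpha` is a hypothetical parameter, not taken from the paper) of cosine similarity between sentence vectors and a normalized character-level Levenshtein similarity:

```python
import math

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cosine(u, v) -> float:
    """Cosine similarity between two dense sentence vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_similarity(s1: str, s2: str, v1, v2, alpha: float = 0.5) -> float:
    """Blend semantic similarity (sentence vectors) with surface
    similarity (1 minus normalized Levenshtein distance)."""
    semantic = cosine(v1, v2)
    surface = 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2), 1)
    return alpha * semantic + (1.0 - alpha) * surface
```

Such pairwise scores would populate the edge weights of the sentence graph (e.g., a TextRank-style model) from which key sentences are ranked and extracted.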