The rapid expansion of available information has created a growing demand for effective multi-document summarization (MDS), motivating this investigation. We introduce a novel methodology that fuses Maximal Marginal Relevance (MMR) with pretrained models, circumventing the often-cumbersome fine-tuning process. Applied to standard MDS datasets such as Multi-News and WCEP, the method yielded significant improvements in ROUGE scores: for example, it achieved a ROUGE-1 score of 46.23 on Multi-News and 30.74 on WCEP. We then discuss the research implications, including better n-gram optimization and model selection strategies. This study has substantial implications for natural language processing, advancing text summarization applications and pointing to avenues for future research, such as refining the role of n-grams in summaries and optimizing model selection processes.
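To make the core selection criterion concrete, the following is a minimal sketch of MMR-based sentence selection. The token-overlap similarity function, the lambda value, and all identifiers are illustrative assumptions for exposition; the paper itself pairs MMR with pretrained-model representations rather than this simple similarity.

```python
# Minimal MMR sketch. The Jaccard similarity and lam=0.7 are illustrative
# assumptions, not the paper's actual configuration.

def jaccard(a: str, b: str) -> float:
    """Token-overlap (Jaccard) similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mmr_select(sentences, query, k=2, lam=0.7):
    """Greedily pick k sentences, balancing query relevance
    against redundancy with already-selected sentences."""
    selected = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        def score(s):
            relevance = jaccard(s, query)
            redundancy = max((jaccard(s, t) for t in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

In a pretrained-model setting, `jaccard` would be replaced by cosine similarity between sentence embeddings, but the greedy relevance-minus-redundancy trade-off is unchanged.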