Automated text summarization has become increasingly important with the exponential growth of text data on the internet. Sifting through large amounts of content takes time, and manually writing accurate summaries of long publications is slow and labor-intensive, so automating the summarization process is crucial. There are two broad approaches: extractive summarization, which selects pertinent sentences verbatim from the original text, and abstractive summarization, which generates new sentences that paraphrase the source. This study compares the pre-trained transformer models T5, GPT-2, and BART for abstractive news summarization. For our investigation, we use the CNN/DailyMail dataset, whose human-written reference summaries allow us to assess and contrast the summaries produced by the different models. To better understand how transformer models perform on text summarization tasks, we analyze which model performs best for abstractive news summarization after fine-tuning.
Hemant Yadav, Nehal Patel, Dishank Jani
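Comparisons of generated summaries against the human-written references in CNN/DailyMail are typically scored with ROUGE-style n-gram overlap. The following is a minimal, self-contained sketch of ROUGE-1 F1 (unigram overlap); it is an illustrative assumption, not the exact evaluation code used in the study, which would normally rely on an established ROUGE library.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate and a reference summary."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Overlap counts each shared unigram at most min(cand, ref) times.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: one token differs between candidate and reference.
reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(round(rouge1_f1(candidate, reference), 3))  # → 0.833
```

In practice, model rankings on CNN/DailyMail are usually reported with ROUGE-1, ROUGE-2, and ROUGE-L together, since a single unigram score can hide differences in fluency and ordering.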