Abstract

Automated text summarization is more important than ever due to the exponential growth of text data on the internet. Sifting through large volumes of content is time-consuming, especially when accurate summaries of long publications must be written by hand, so automating the summarization process with machine learning models is increasingly valuable. There are two broad approaches: abstractive summarization, which generates new sentences that paraphrase the source text, and extractive summarization, which selects pertinent sentences directly from the original. This study compares the pre-trained transformer models T5, GPT-2, and BART for abstractive news summarization. For our investigation we use the CNN/DailyMail dataset, whose human-written reference summaries allow us to assess and contrast the summaries produced by each model. By analyzing which model performs best for abstractive news summarization after fine-tuning, we aim to better understand how transformer models behave on text summarization tasks.
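
The paper itself does not include code, but the workflow the abstract describes (fine-tuned transformer checkpoints generating summaries of CNN/DailyMail articles, compared against the human-written references) maps onto a short Hugging Face sketch. Everything below is illustrative rather than the authors' setup: the facebook/bart-large-cnn checkpoint stands in for a fine-tuned BART model, and ROUGE is assumed as the comparison metric since the abstract does not name one.

```python
# Illustrative sketch (not from the paper): summarize a CNN/DailyMail article
# with a BART checkpoint and score the output against the human reference.
# Checkpoint name and ROUGE metric are assumptions, not the paper's spec.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from rouge_score import rouge_scorer

# CNN/DailyMail pairs news articles with human-written "highlights".
sample = load_dataset("cnn_dailymail", "3.0.0", split="test")[0]
article, reference = sample["article"], sample["highlights"]

# facebook/bart-large-cnn is BART already fine-tuned on CNN/DailyMail; a
# fine-tuned T5 checkpoint would follow the same seq2seq pattern, while
# GPT-2, being decoder-only, would need a causal-LM setup instead.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,        # beam search is the usual decoding choice here
    min_length=56,
    max_length=142,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Compare the generated summary to the human reference with ROUGE.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, summary).items():
    print(f"{name}: F1 = {score.fmeasure:.3f}")
```

Fine-tuning itself would wrap the same model and tokenized article/highlight pairs in a standard seq2seq training loop; the sketch above covers only the inference and evaluation half of the comparison the study describes.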

Keywords:
Automatic summarization, Computer science, Transformer, Natural language processing, Artificial intelligence, Text graph, Information retrieval, Multi-document summarization, The Internet, World Wide Web

