JOURNAL ARTICLE

Improving Abstractive Summarization with Commonsense Knowledge

Pranav Nair, Anil Kumar Singh

Year: 2021 · Journal: Proceedings of the Student Research Workshop ... · Pages: 135-143

Abstract

Large-scale pretrained models have demonstrated strong performance on several natural language generation and understanding benchmarks. However, introducing commonsense into them to generate more realistic text remains a challenge. Inspired by previous work on commonsense knowledge generation and generative commonsense reasoning, we introduce two methods to add commonsense reasoning skills and knowledge to abstractive summarization models. Both methods beat the baseline on ROUGE scores, demonstrating the superiority of our models over the baseline. Human evaluation results suggest that summaries generated by our methods are more realistic and contain fewer commonsense errors.

Keywords:
Automatic summarization, Commonsense knowledge, Commonsense reasoning, Computer science, Artificial intelligence, Natural language processing, Baseline, Language model, Generative grammar, Question answering, Domain knowledge

Metrics

Cited By: 3
FWCI (Field Weighted Citation Impact): 0.42
Refs: 18
Citation Normalized Percentile: 0.70

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Text Analysis Techniques (Physical Sciences → Computer Science → Artificial Intelligence)