JOURNAL ARTICLE

Controllable Abstractive Summarization Using Multilingual Pretrained Language Model

Abstract

By leveraging a multilingual pretrained language model, we show that CTRLSum [1], an abstractive summarization approach that can be controlled through keywords, improves a baseline summarization system in four languages (English, Indonesian, Spanish, and French) by 1.57 points in average ROUGE-1, with the Indonesian model achieving state-of-the-art results. We further provide a novel analysis of the importance of the keywords fed to CTRLSum, which (1) shows hypothetical upper-bound results that outperform the state of the art in all four languages by a large margin and (2) points to a natural direction for future work: improving CTRLSum's keyword prediction model.
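
As a concrete illustration of the keyword-control mechanism, the sketch below shows how a CTRLSum-style input is constructed: control keywords are prepended to the source document before it is passed to a multilingual pretrained sequence-to-sequence model. The checkpoint name (facebook/mbart-large-50), the " | " and " => " separators (borrowed from the public English CTRLSum checkpoints), and the sample inputs are illustrative assumptions rather than the paper's exact setup, and the model would first need fine-tuning on keyword-prefixed (document, summary) pairs.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: any multilingual seq2seq checkpoint could stand in here.
MODEL_NAME = "facebook/mbart-large-50"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def summarize_with_keywords(document: str, keywords: list) -> str:
    # Keyword control: prepend the keywords to the source text with a
    # separator so the model conditions its generation on them.
    controlled_input = " | ".join(keywords) + " => " + document
    inputs = tokenizer(controlled_input, return_tensors="pt",
                       truncation=True, max_length=1024)
    summary_ids = model.generate(**inputs, num_beams=4, max_length=128,
                                 no_repeat_ngram_size=3)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Hypothetical usage: the same call works for any of the four languages,
# given an appropriately fine-tuned checkpoint.
print(summarize_with_keywords("Jakarta adalah ibu kota Indonesia ...",
                              ["Jakarta", "ibu kota"]))

Under this framing, the hypothetical upper-bound results correspond to feeding oracle keywords (extracted with access to the reference summaries) in place of keywords produced by the prediction model, which is why improving the keyword predictor is the natural next step.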

Keywords:
Automatic summarization, Computer science, Margin (machine learning), Natural language processing, Indonesian, Artificial intelligence, Baseline, Natural language, Language model, Natural language generation, Linguistics, Machine learning
