JOURNAL ARTICLE

StyleDGPT: Stylized Response Generation with Pre-trained Language Models

Abstract

Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet progress is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to various natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation toward the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.
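The abstract describes a fine-tuning objective that combines the usual response likelihood with a word-level KL term (toward a style language model) and a sentence-level style-classifier term. A minimal sketch of such a combined loss is below; the function names, the toy distributions, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's actual formulation or values.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) over a shared vocabulary; assumes q is strictly positive
    # wherever p is, and that both are valid probability distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def styledgpt_loss(nll, p_model, p_style_lm, style_prob, alpha=0.1, beta=0.5):
    """Combine the three training signals sketched in the abstract:
    - nll: standard negative log-likelihood of the response
    - word-level: KL term pulling the model's next-token distribution
      toward a style language model's distribution
    - sentence-level: cross-entropy against a style classifier's
      probability that the response carries the target style
    alpha and beta are hypothetical interpolation weights."""
    word_level = kl_divergence(p_model, p_style_lm)
    sentence_level = -math.log(style_prob)
    return nll + alpha * word_level + beta * sentence_level

# Toy example: a 3-token vocabulary and a classifier that assigns the
# target style probability 0.8 to the generated response.
loss = styledgpt_loss(
    nll=2.3,
    p_model=[0.7, 0.2, 0.1],
    p_style_lm=[0.5, 0.3, 0.2],
    style_prob=0.8,
)
```

Both auxiliary terms are non-negative, so the combined loss can only increase over the plain NLL; tuning `alpha` and `beta` trades style strength against contextual coherence.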

Keywords:
Language model; Natural language processing; Natural language understanding; Machine learning; Artificial intelligence; Computer science

Metrics

Cited by: 15
FWCI (Field-Weighted Citation Impact): 1.76
References: 68
Citation Normalized Percentile: 0.87

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)