Abstractive dialogue summarization has recently been receiving more attention. We propose a coarse-to-fine model for generating abstractive dialogue summaries, and introduce a fact-aware reinforcement learning (RL) objective that improves the factual consistency between the dialogue and the generated summary. The model first generates the predicate-argument spans of the dialogue, and then generates the final summary under the fact-aware RL objective. Extensive experiments and analysis on two benchmark datasets demonstrate that our proposed method effectively improves the quality of the generated summaries, especially in coherence and consistency.
Changmeng Zheng, Yi Cai, Guanjie Zhang, Qing Li