We study knowledge-grounded dialogue generation with pre-trained language models. To leverage redundant external knowledge under the capacity constraint of such models, we propose equipping response generation, defined by a pre-trained language model, with a knowledge selection module, together with an unsupervised approach that jointly optimizes knowledge selection and response generation using only unlabeled dialogues. Empirical results on two benchmarks indicate that our model significantly outperforms state-of-the-art methods in both automatic evaluation and human judgment.
Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan
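To make the idea concrete, the sketch below illustrates knowledge selection under a capacity constraint: candidate knowledge sentences are scored against the dialogue context, and the highest-scoring ones are kept within a token budget before being passed to the generator. This is a minimal, hypothetical illustration, not the authors' implementation; `KnowledgeSelector`, `select_knowledge`, the dot-product scorer, and the greedy token budget are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Minimal sketch (NOT the paper's implementation): score candidate knowledge
# sentences against the dialogue context, then keep the top-scoring ones that
# fit a token budget, mirroring the capacity constraint of a pre-trained LM.

class KnowledgeSelector(nn.Module):
    """Hypothetical selector: dot-product relevance between context and knowledge."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.ctx_proj = nn.Linear(hidden, hidden)
        self.kn_proj = nn.Linear(hidden, hidden)

    def forward(self, ctx_vec: torch.Tensor, kn_vecs: torch.Tensor) -> torch.Tensor:
        q = self.ctx_proj(ctx_vec)   # (hidden,)   projected context encoding
        k = self.kn_proj(kn_vecs)    # (num_cand, hidden) projected candidates
        return k @ q                 # (num_cand,) relevance scores


def select_knowledge(scores: torch.Tensor, candidates: list[str], budget: int) -> list[str]:
    # Greedily keep the highest-scoring candidates whose total token count
    # stays within the generator's input budget.
    order = torch.argsort(scores, descending=True)
    chosen, used = [], 0
    for i in order.tolist():
        n_tokens = len(candidates[i].split())
        if used + n_tokens <= budget:
            chosen.append(candidates[i])
            used += n_tokens
    return chosen


if __name__ == "__main__":
    torch.manual_seed(0)
    candidates = [
        "the eiffel tower is in paris",
        "paris is the capital of france",
        "the tower opened in 1889",
    ]
    selector = KnowledgeSelector()
    ctx_vec = torch.randn(128)                   # stand-in context encoding
    kn_vecs = torch.randn(len(candidates), 128)  # stand-in knowledge encodings
    scores = selector(ctx_vec, kn_vecs)
    # Selected knowledge would be concatenated with the dialogue context
    # and fed to the pre-trained language model for response generation.
    print(select_knowledge(scores, candidates, budget=12))
```

In the paper's setting, the selector and the generator are trained jointly from unlabeled dialogues rather than with the random encodings used here; the sketch only shows how a selection step can bound what reaches the generator.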