Developing a dialogue system that accomplishes specific tasks through natural conversation with humans is challenging. Although task-oriented dialogue systems built on traditional pipeline models or sequence-to-sequence models have achieved considerable success, they still cannot efficiently exploit an external knowledge base to generate high-quality dialogue responses. This paper proposes a Memory-to-Sequence framework that jointly decodes with a Memory Network (MemNN) and a Long Short-Term Memory (LSTM) network, better capturing the dependence between system responses and the knowledge base. Experiments on the In-Car Assistant dataset show that our model significantly outperforms the baseline models and attains state-of-the-art performance on the dataset's two subtasks.
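To make the joint-decoding idea concrete, the following is a minimal sketch of one decoding step in which an LSTM-style decoder state queries a MemNN-style memory over knowledge-base entries and the read-out is combined with the state before projecting to the vocabulary. All function names, dimensions, and the concatenation-based combination are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def memnn_hop(query, memory_keys, memory_values):
    """One MemNN attention hop (sketch): score each memory slot
    against the query, then return the attention distribution and
    the attention-weighted read-out vector."""
    scores = memory_keys @ query          # (n_mem,)
    attn = softmax(scores)                # distribution over KB entries
    read = attn @ memory_values           # (d,) weighted read-out
    return attn, read

def joint_decode_step(decoder_state, memory_keys, memory_values, vocab_proj):
    """Hypothetical joint decoding step: the decoder state attends
    over the memory, and [state; read-out] is projected to a
    distribution over the output vocabulary."""
    attn, read = memnn_hop(decoder_state, memory_keys, memory_values)
    logits = vocab_proj @ np.concatenate([decoder_state, read])
    return softmax(logits), attn

# Toy usage with random parameters (purely illustrative).
rng = np.random.default_rng(0)
d, n_mem, vocab = 4, 3, 5
state = rng.standard_normal(d)                 # decoder hidden state
keys = rng.standard_normal((n_mem, d))         # memory key embeddings
vals = rng.standard_normal((n_mem, d))         # memory value embeddings
proj = rng.standard_normal((vocab, 2 * d))     # output projection
vocab_dist, kb_attn = joint_decode_step(state, keys, vals, proj)
```

The memory attention distribution `kb_attn` can also be read as a pointer over KB entries, which is one way such models copy entity values from the knowledge base into the response.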
Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, Ting Liu
Lubao Wang, Xinping Zhang, Junhua Wang, Yifan Zhao
Zheng Zhang, Minlie Huang, Zhongzhou Zhao, Feng Ji, Haiqing Chen, Xiaoyan Zhu