Explanations in conventional recommender systems have demonstrated benefits in helping the user understand the rationale behind the recommendations and in improving the system's efficiency, transparency, and trustworthiness. In a conversational environment, multiple contextualized explanations need to be generated, which poses further challenges for explanation. To better measure explainability in conversational recommender systems (CRS), we propose ten evaluation perspectives based on concepts from conventional recommender systems together with the characteristics of CRS. We assess five existing CRS benchmark datasets using these metrics and observe the need to improve the explanation quality of CRS. To this end, we combine manual and automatic approaches to extend these dialogues and construct a new CRS dataset, Explainable Recommendation Dialogues (E-ReDial), which contains 756 dialogues with over 2,000 high-quality rewritten explanations. We compare two baseline approaches for explanation generation on E-ReDial. Experimental results suggest that models trained on E-ReDial can significantly improve explainability, and that introducing knowledge into the models further improves performance. GPT-3 in the in-context learning setting generates more realistic and diverse movie descriptions, whereas T5 trained on E-ReDial better generates clear reasons for recommendations based on user preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.
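As a rough illustration of the T5-based explanation-generation baseline described above, the sketch below feeds a dialogue context to a seq2seq model and decodes an explanation. The checkpoint name (`t5-base`), the task prefix, and the example dialogue are assumptions for illustration only and do not reflect the exact training setup or prompt format used in the paper; in practice the model would first be fine-tuned on E-ReDial.

```python
# Minimal sketch: generating a recommendation explanation with a T5-style model.
# Assumes a model fine-tuned on E-ReDial; here we load a generic checkpoint as a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # placeholder checkpoint (assumption)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical dialogue context: prior turns plus the recommended item.
dialogue_context = (
    "User: I loved Inception, any similar movies? "
    "System: I recommend Interstellar."
)

# The "explain recommendation:" prefix is an illustrative convention, not the paper's format.
inputs = tokenizer(
    "explain recommendation: " + dialogue_context,
    return_tensors="pt",
    truncation=True,
)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```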