Jing Yao, Xiting Wang, Jianxun Lian, Xiaoyuan Yi, Xing Xie
Explainability is critical for recommender systems, both to ensure a good user experience and to help designers debug. However, generating explanations in recommender systems usually requires substantial effort due to the dependency on additional data and case-by-case model design. One possible solution to these challenges is reasoning with logic rules, whose validity or confidence can automatically indicate high-quality explanations and whose format is general. However, pioneering methods can hardly be applied to recommendation because of the high sparsity of interaction data, which makes it difficult to accurately compute rule validity, and because of the ranking-oriented nature of the task. To bridge this gap, we propose a general framework for **Reco**mmendation with **lo**gic **r**ule reasoning (Recolor) that satisfies three desirable properties. First, we explicitly estimate rule validity to ensure well-grounded decisions, designing a fuzzy logic validity module for accurate estimation on highly sparse recommendation data. Second, we ensure generality with respect to both input data types and model architectures by designing a neural logic generation module, which decouples user–item representation learning from rule construction. Third, we integrate the two above-mentioned modules with a ranking-oriented BPR loss, achieving unified optimization of explainability and accuracy. For any given neural recommendation model, our proposed logic rule reasoning framework can upgrade it to a self-explainable version. Numerical experiments and user studies on four public recommendation datasets with different levels of sparsity demonstrate that our framework yields high-validity rule explanations, generality in architecture and data, and high recommendation accuracy.
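The ranking-oriented BPR loss mentioned in the abstract maximizes the log-sigmoid of the score margin between an observed (positive) and an unobserved (negative) item. A minimal sketch, assuming plain NumPy score arrays (the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking loss: -mean(log sigmoid(s_pos - s_neg)).

    pos_scores / neg_scores: model scores for positive and sampled
    negative items, paired elementwise. Lower loss means the model
    ranks positives above negatives by a larger margin.
    """
    diff = np.asarray(pos_scores, dtype=float) - np.asarray(neg_scores, dtype=float)
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))
```

With a zero margin the loss is log 2; it shrinks as positive items are scored increasingly above negatives.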
Yaxin Zhu, Yikun Xian, Zuohui Fu, Gerard de Melo, Yongfeng Zhang
Forough Arabshahi, Jennifer Lee, Antoine Bosselut, Yejin Choi, Tom M. Mitchell