Zitao Xu, Shu Chen, Weike Pan, Zhong Ming
Cross-domain sequential recommendation aims to alleviate the data sparsity problem while capturing users’ sequential preferences. However, most existing methods learn user preferences in each domain separately and then transfer knowledge to associate the two separate domains, which neglects the item transition patterns across sequences from different domains. Moreover, the sparsity problem persists, since some items in both the target and source domains are interacted with only a few times. To address these issues, in this paper we propose a generic framework named multi-view graph contrastive learning (MGCL). Specifically, we tackle the problem from the perspective of an intra-domain item representation view and an inter-domain user preference view. In the former view, we adopt a contrastive mechanism to jointly learn the dynamic sequential information in a user sequence graph and the static collaborative information in the cross-domain global graph, while the latter view captures the complementary information of the user’s preferences from different domains. Considering that real-world scenarios involve multiple domains, we further extend MGCL to MGCL+ for multi-domain sequential recommendation and design multi-domain adaptive gated networks to alleviate the negative transfer problem. Extensive empirical studies on three real-world datasets demonstrate that our MGCL and MGCL+ significantly outperform the state-of-the-art methods.
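The contrastive mechanism described above aligns two representations of the same item or user (e.g., a sequence-graph view and a global-graph view) while pushing apart mismatched pairs. As a minimal sketch of such an objective, the following implements a standard InfoNCE-style loss between two embedding views; the array names, dimensions, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce(z_a, z_b, tau=0.2):
    """InfoNCE contrastive loss between two views of the same batch.

    z_a, z_b: (batch, dim) embeddings; row i of z_a and row i of z_b
    form a positive pair, and all other rows act as negatives.
    tau: temperature (illustrative value, not from the paper).
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                      # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z)                       # identical views: low loss
loss_random = info_nce(z, rng.normal(size=(8, 16))) # unrelated views: high loss
```

When the two views agree (here, identical embeddings), the diagonal dominates the similarity matrix and the loss is small; for unrelated views the loss approaches log(batch size), which is the intuition behind using such a loss to tie the sequential and collaborative views together.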