Real-world recommendation scenarios usually need to handle diverse user-item interaction behaviours, such as page views, adding items to carts, and purchases. Interactions that precede the actual target behaviour (e.g., purchasing an item) capture the user's preferences from different angles and serve as auxiliary information (e.g., page views) to enrich the system's knowledge of users' preferences, thereby helping to enhance recommendation for the target behaviour. Despite efforts to model users' multi-behaviour interaction information, existing multi-behaviour recommenders still face two challenges: (1) data sparsity across multiple user behaviours, a common issue that limits recommendation performance, particularly for the target behaviour, which typically exhibits fewer interactions than the auxiliary behaviours; (2) noisy auxiliary interaction behaviours, where the information in the auxiliary behaviours may be irrelevant to recommendation. In this case, directly applying contrastive learning between the target behaviour and the auxiliary behaviours amplifies the noise in the auxiliary behaviours, thereby corrupting the real semantics that can be derived from the target behaviour. To address these two challenges, we propose a new model called Knowledge-Enhanced Multi-behaviour Contrastive Learning for Recommendation (KEMCL). In particular, to address the sparsity of users' multi-behaviour interactions, we leverage a dual-perspective knowledge encoding component that enriches the semantic representations of items, and we generate supervision signals through self-supervised learning to enhance recommendation. In addition, we develop a cross-behaviour learning component, which includes two contrastive learning (CL) methods, inter CL and intra CL, to alleviate the problem of noisy auxiliary interactions.
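The contrastive learning between behaviours that the abstract refers to can be sketched as an InfoNCE-style loss, where the same user's embedding under the target behaviour and under an auxiliary behaviour forms a positive pair and other in-batch users act as negatives. The snippet below is a minimal NumPy illustration under assumed shapes and an assumed temperature hyperparameter, not the paper's actual formulation.

```python
import numpy as np

def infonce(z_target, z_aux, temperature=0.2):
    """InfoNCE-style contrastive loss between two behaviour views.

    z_target, z_aux: (batch, dim) user embeddings from the target and an
    auxiliary behaviour; row i of each matrix is the same user (positive pair).
    """
    # L2-normalise both views so the dot product is cosine similarity
    z_t = z_target / np.linalg.norm(z_target, axis=1, keepdims=True)
    z_a = z_aux / np.linalg.norm(z_aux, axis=1, keepdims=True)
    logits = z_t @ z_a.T / temperature              # (batch, batch) similarities
    # log-softmax over each row, with the positive pair on the diagonal
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# toy usage: 8 users, 16-dimensional embeddings from two behaviour views
rng = np.random.default_rng(0)
loss = infonce(rng.standard_normal((8, 16)), rng.standard_normal((8, 16)))
```

Minimising this loss pulls the two views of the same user together while pushing apart views of different users; the paper's inter/intra CL variants would differ in which embedding pairs are contrasted.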
Extensive experiments on three public recommendation datasets show that our proposed KEMCL model significantly outperforms seven existing state-of-the-art methods. In particular, our KEMCL model outperforms the best baseline, namely KMCLR, by 5.42% on the large Tmall dataset.
Yuru Liu, Yong Xu, Cheng Li, Chuang Shi, Qun Fang
Weijun Xu, Fan Gao, Jinsong Su, Qingqiang Wu, Meihong Wang