JOURNAL ARTICLE

Momentum Contrastive Learning for Sequential Recommendation

Abstract

Contrastive self-supervised learning (SSL) based sequential recommendation (SR) models have recently achieved significant performance improvements in addressing the data sparsity problem, which hinders the learning of high-quality user representations. However, current contrastive SSL based models ignore the importance of consistency between sample pairs. Consistency here denotes the degree of similarity between the feature representations of encoded sample pairs: the higher the consistency, the better the learned features. To investigate the benefits of consistency and exploit it effectively, we design Momentum Contrastive Learning for Sequential Recommendation (MCL4SRec). Experiments on four public datasets demonstrate the superiority of MCL4SRec, which achieves state-of-the-art performance over existing baselines.
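The abstract does not spell out the mechanism, but "momentum contrastive learning" conventionally refers to a MoCo-style scheme: a slowly updated "key" encoder tracks the trained "query" encoder, which keeps the encoded representations of a sample pair consistent across training steps. The following is a minimal toy sketch of that idea, not the paper's actual method; the linear encoder, the weight names (W_q, W_k), the momentum coefficient m, and the noise-based augmentation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_in, dim_out = 8, 4
W_q = rng.normal(size=(dim_in, dim_out))  # query encoder weights (trained by backprop)
W_k = W_q.copy()                          # key encoder starts as an exact copy
m = 0.999                                 # momentum coefficient (assumed value)

def encode(W, x):
    """Toy linear encoder followed by L2 normalisation."""
    h = x @ W
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

def momentum_update(W_q, W_k, m):
    """MoCo-style update: theta_k <- m * theta_k + (1 - m) * theta_q."""
    return m * W_k + (1.0 - m) * W_q

def consistency(z1, z2):
    """Cosine similarity between the encoded representations of a sample pair."""
    return float(np.sum(z1 * z2, axis=-1).mean())

# Two augmented "views" of the same user interaction sequence (toy vectors).
x = rng.normal(size=(1, dim_in))
view1 = x + 0.01 * rng.normal(size=x.shape)
view2 = x + 0.01 * rng.normal(size=x.shape)

# Simulate one training step: the query encoder moves (stand-in for SGD),
# then the key encoder follows it slowly via the momentum update.
W_q = W_q + 0.1 * rng.normal(size=W_q.shape)
W_k = momentum_update(W_q, W_k, m)

z_q = encode(W_q, view1)
z_k = encode(W_k, view2)
print(round(consistency(z_q, z_k), 3))
```

Because m is close to 1, the key encoder changes very little per step, so the two views of the same sequence keep highly similar (consistent) representations even while the query encoder is being updated.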

Keywords:
Consistency, Similarity, Feature learning, Representation learning, Machine learning, Artificial intelligence, Data mining, Natural language processing, Computer science

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.62
Refs: 16
Citation Normalized Percentile: 0.66

Topics

Recommender Systems and Techniques
Physical Sciences →  Computer Science →  Information Systems
Face recognition and analysis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
© 2026 ScienceGate Book Chapters — All rights reserved.