JOURNAL ARTICLE

Hybrid Learning with Teacher-Student Knowledge Distillation for Recommenders

Abstract

Latent variable models have been widely adopted by recommender systems owing to advances in their learning scalability and performance. Recent research has focused on hybrid models. However, because of the sparsity of user and/or item data, most of these proposals rely on convoluted model architectures and objective functions. In particular, the objective functions are mostly tailored to sparse data from either the user or the item space. Although it is possible to derive an analogous model for both spaces, doing so makes the system overly complicated. To address this problem, we propose a deep-learning-based latent model called Distilled Hybrid Network (DHN) with a teacher-student learning architecture. Unlike related work that tries to better incorporate content components to improve accuracy, we instead focus on optimizing model learning. To the best of our knowledge, we are the first to employ a teacher-student learning architecture for recommender systems. Experimental results show that our proposed model notably outperforms state-of-the-art approaches. We also show that our proposed architecture can be applied to existing recommender models to improve their accuracy.
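The abstract does not spell out the DHN objective, but the core of any teacher-student setup is a distillation loss that pushes the student's predictions toward the teacher's temperature-softened output distribution. The sketch below shows the standard formulation (KL divergence between softened distributions, scaled by T²); the function names and the choice of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T yields a softer distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, as in the standard knowledge-distillation recipe.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student soft predictions
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))
```

In a hybrid-recommender setting, the teacher would typically be a larger (or content-augmented) model whose soft item scores supervise a simpler student alongside the usual rating/ranking loss; this loss is zero when the two models agree exactly and grows as their softened distributions diverge.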

Keywords:
Computer science, Recommender system, Scalability, Artificial intelligence, Machine learning, Deep learning, Latent variable, Distillation, Architecture, Database

Metrics

Cited by: 2
References: 53
FWCI (Field-Weighted Citation Impact): 0.00
Citation Normalized Percentile: 0.33


Topics

Recommender Systems and Techniques
Physical Sciences →  Computer Science →  Information Systems
Image Retrieval and Classification Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Graph Neural Networks
Physical Sciences →  Computer Science →  Artificial Intelligence