JOURNAL ARTICLE

Data Poisoning Attacks against Differentially Private Recommender Systems

Abstract

Recommender systems based on collaborative filtering are highly vulnerable to data poisoning attacks, in which a determined attacker injects fake users with false user-item feedback, with the objective of either degrading the recommender system's overall accuracy or promoting/demoting a target set of items. Recently, differential privacy has been explored as a defense against data poisoning attacks in the general machine learning setting. In this paper, we study the effectiveness of differential privacy against such attacks on matrix-factorization-based collaborative filtering systems. Concretely, we conduct extensive experiments evaluating robustness to the injection of malicious user profiles: we simulate common types of shilling attacks on real-world data and compare the predictions of standard matrix factorization with those of differentially private matrix factorization.
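To make the attack setting concrete, the sketch below simulates a simple random shilling attack of the kind the abstract describes: fake user profiles give a target item the maximum rating (plus random filler ratings) and are appended to a small synthetic rating matrix, after which a plain matrix-factorization model is retrained and the prediction shift on the target item is measured. This is an illustrative construction under assumed parameters (latent dimension, learning rate, filler size), not the paper's experimental code, and it shows only the undefended baseline, without the differentially private variant.

```python
import numpy as np

def train_mf(R, k=4, steps=200, lr=0.01, reg=0.1, seed=0):
    """Plain matrix factorization via SGD over observed (nonzero) entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    observed = np.argwhere(R > 0)
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

def random_shilling_profiles(n_fake, n_items, target_item, seed=1):
    """Fake users: max rating on the target item, random filler elsewhere."""
    rng = np.random.default_rng(seed)
    F = np.zeros((n_fake, n_items))
    for f in range(n_fake):
        filler = rng.choice(n_items, size=n_items // 3, replace=False)
        F[f, filler] = rng.integers(1, 6, size=filler.size)
        F[f, target_item] = 5.0  # promote the target item
    return F

# Tiny synthetic 1-5 rating matrix (0 = unobserved); the target item
# is genuinely unpopular: every observed rating for it is 1.
rng = np.random.default_rng(42)
R = rng.integers(0, 6, size=(30, 10)).astype(float)
target = 0
R[:, target] = np.where(R[:, target] > 0, 1.0, 0.0)

U, V = train_mf(R)
clean_pred = (U @ V.T)[:, target].mean()

# Inject 15 fake profiles and retrain; evaluate only on genuine users.
R_poisoned = np.vstack([R, random_shilling_profiles(15, R.shape[1], target)])
U2, V2 = train_mf(R_poisoned)
poisoned_pred = (U2 @ V2.T)[: R.shape[0], target].mean()

print(f"mean predicted rating for target item: "
      f"clean {clean_pred:.2f} vs poisoned {poisoned_pred:.2f}")
```

The attack succeeds when the poisoned model predicts a noticeably higher rating for the target item among genuine users; the paper's experiments ask whether the noise added by differentially private matrix factorization dampens exactly this shift.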

Keywords:
Recommender system, Computer science, Computer security, World Wide Web

Metrics

Cited by: 6
FWCI (Field-Weighted Citation Impact): 0.44
References: 5
Citation Normalized Percentile: 0.68

Topics

Privacy-Preserving Technologies in Data (Physical Sciences → Computer Science → Artificial Intelligence)
Cryptography and Data Security (Physical Sciences → Computer Science → Artificial Intelligence)
Privacy, Security, and Data Protection (Social Sciences → Sociology and Political Science)