Chuyuan Wei, Ke Duan, Shengda Zhuo, Hongchun Wang, Shuqiang Huang, Jie Liu
Recommender systems have long struggled with challenges such as cold start and data sparsity, which can lead to poor recommendation performance. While previous approaches have attempted to address these issues by incorporating side information, they often introduce noise, lack flexibility for data expansion, and suffer from inconsistent data quality—factors that hinder accurate user preference inference and degrade recommendation performance. With their vast knowledge bases and advanced reasoning capabilities, large language models (LLMs) are particularly well-suited to supplementing auxiliary information and capturing implicit user intent. To address these challenges, we propose a novel framework, ER2ALM, which leverages LLMs enhanced by Retrieval-Augmented Generation (RAG) to improve recommendation outcomes. Our framework addresses these challenges by flexibly and accurately augmenting auxiliary information and capturing users’ implicit preferences and interests. Additionally, to mitigate the risk of introducing noise, we incorporate a noise reduction strategy that ensures the reliability of the augmented information. Experimental validation on two real-world datasets demonstrates the efficacy of our approach, which significantly improves both the accuracy and robustness of recommendations compared to state-of-the-art methods. These results demonstrate the potential of our framework as a new paradigm for preference mining in recommender systems.