JOURNAL ARTICLE

Enhancing ID-based Recommendation with Large Language Models

Abstract

Large language models (LLMs) have recently attracted significant attention across many domains, including recommender systems. Recent research leverages LLMs to improve performance and user modeling in recommender systems, primarily by using LLMs to interpret textual data in recommendation tasks. In ID-based recommendation, however, textual data is absent and only ID data is available; the potential of LLMs for ID data within this paradigm remains largely unexplored. To this end, we introduce a pioneering approach called "LLM for ID-based recommendation" (LLM4IDRec), which integrates the capabilities of LLMs while relying exclusively on ID data, diverging from the previous reliance on textual data. The basic idea of LLM4IDRec is to employ an LLM to augment ID data: if the augmented ID data improves recommendation performance, this demonstrates that the LLM can interpret ID data effectively and opens an innovative way to integrate LLMs into ID-based recommendation. Specifically, we first define a prompt template that helps the LLM comprehend ID data and the ID-based recommendation task. Second, while generating training data with this prompt template, we develop two efficient methods to capture both the local and global structure of the ID data; we feed this generated training data into the LLM and fine-tune it with LoRA. After the fine-tuning phase, we use the fine-tuned LLM to generate ID data that aligns with users' preferences, and we design two filtering strategies to eliminate invalid generated data. Third, we merge the original ID data with the generated ID data to create augmented data. Finally, we input this augmented data into existing ID-based recommendation models without any modification to the recommendation model itself.
We evaluate the effectiveness of our LLM4IDRec approach using three widely used datasets. Our results demonstrate a notable improvement in recommendation performance, with our approach consistently outperforming existing methods in ID-based recommendation by solely augmenting input data.
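The abstract's filter-then-merge step could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the two filter conditions used here (generated IDs must belong to the item vocabulary and must not duplicate the user's existing history) are assumptions standing in for the paper's two unspecified filtering strategies, and all names and toy data are hypothetical.

```python
def filter_generated(generated, item_vocab, history):
    """Drop generated item IDs that are invalid (outside the item
    vocabulary) or redundant (already in the user's history).
    These two conditions are illustrative assumptions."""
    seen = set(history)
    kept = []
    for item in generated:
        if item in item_vocab and item not in seen:
            kept.append(item)
            seen.add(item)
    return kept

def augment(original, generated_per_user, item_vocab):
    """Merge each user's original ID sequence with the filtered
    LLM-generated IDs to form the augmented input data."""
    augmented = {}
    for user, history in original.items():
        extra = filter_generated(
            generated_per_user.get(user, []), item_vocab, history)
        augmented[user] = history + extra
    return augmented

# Hypothetical toy data: user histories are lists of item IDs.
original = {"u1": [3, 7, 9]}
generated = {"u1": [7, 12, 999]}   # 7 is a duplicate, 999 is out of vocabulary
vocab = set(range(100))
print(augment(original, generated, vocab))   # {'u1': [3, 7, 9, 12]}
```

The augmented dictionary would then be fed, unchanged, to any existing ID-based recommendation model, which is the point of the approach: only the input data is modified.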

Keywords:
Computer science, Natural language processing, Linguistics, Philosophy

Metrics

Cited By: 4
FWCI (Field Weighted Citation Impact): 6.11
Refs: 67
Citation Normalized Percentile: 0.94 (in top 10%)


Topics

Recommender Systems and Techniques (Physical Sciences → Computer Science → Information Systems)
Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Image Retrieval and Classification Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

Related Documents

JOURNAL ARTICLE

LLMCDSR: Enhancing Cross-Domain Sequential Recommendation with Large Language Models

Haoran Xin, Ying Sun, Chao Wang, Hui Xiong

Journal: ACM Transactions on Information Systems, Year: 2025, Vol: 43 (5), Pages: 1-33
JOURNAL ARTICLE

CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models

Ziyu Li, Zhen Chen, Xuejing Fu, Tong Mo, Weiping Li

Journal: Computers, Materials & Continua, Year: 2025, Vol: 0 (0), Pages: 1-10
JOURNAL ARTICLE

Enhancing Recommendation Diversity by Re-ranking with Large Language Models

Diego Carraro, Derek Bridge

Journal: ACM Transactions on Recommender Systems, Year: 2024, Vol: 4 (2), Pages: 1-40