JOURNAL ARTICLE

A Multimodal Graph Recommendation Method Based on Cross-Attention Fusion

Kai Li, Long Xu, Cheng Zhu, Kunlun Zhang

Year: 2024 | Journal: Mathematics | Vol: 12 (15) | Article: 2353 | Publisher: Multidisciplinary Digital Publishing Institute

Abstract

Research on recommendation methods using multimodal graph information presents a significant challenge within the realm of information services. Prior studies in this area have lacked precision in the purification and denoising of multimodal information and have insufficiently explored fusion methods. We introduce a multimodal graph recommendation approach leveraging cross-attention fusion. This model enhances and purifies multimodal information by embedding the IDs of items and their corresponding interactive users, thereby optimizing the utilization of such information. To facilitate better integration, we propose a cross-attention mechanism-based multimodal information fusion method, which effectively processes and merges related and differential information across modalities. Experimental results on three public datasets indicated that our model performed exceptionally well, demonstrating its efficacy in leveraging multimodal information.
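The abstract gives no formulas for the proposed fusion step. As a rough illustration of what cross-attention fusion between two item modalities can look like (the function names, the symmetric two-direction design, and the final averaging step are illustrative assumptions, not the paper's actual method), a minimal NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: one modality's features act as
    queries and attend over the other modality's features (keys = values)."""
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (n_q, n_kv) similarity
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # attended representation

def fuse(visual, textual):
    """Symmetric fusion: each modality attends to the other, and the two
    attended representations are averaged into one fused embedding."""
    v2t = cross_attention(visual, textual)   # visual queries over textual features
    t2v = cross_attention(textual, visual)   # textual queries over visual features
    return 0.5 * (v2t + t2v)                 # assumes matching item count and dim

rng = np.random.default_rng(0)
visual = rng.normal(size=(4, 8))    # 4 items, 8-dim visual features
textual = rng.normal(size=(4, 8))   # 4 items, 8-dim textual features
fused = fuse(visual, textual)
print(fused.shape)  # (4, 8)
```

In practice such a module would use learned query/key/value projections and sit inside the graph model's propagation layers; this sketch only shows the attention arithmetic itself.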

Keywords:
Computer science; Fusion; Graph; Artificial intelligence; Information retrieval; Theoretical computer science; Linguistics; Philosophy

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 7.64
References: 43
Citation Normalized Percentile: 0.95 (in top 10%)

Topics

Recommender Systems and Techniques
Physical Sciences → Computer Science → Information Systems
Image Retrieval and Classification Techniques
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Advanced Graph Neural Networks
Physical Sciences → Computer Science → Artificial Intelligence
