JOURNAL ARTICLE

Multimodal Recommendation System Based on Cross Self-Attention Fusion

Peishan Li, Weixiao Zhan, Lutao Gao, Shuran Wang, Linnan Yang

Year: 2025   Journal: Systems   Vol: 13 (1)   Pages: 57   Publisher: Multidisciplinary Digital Publishing Institute

Abstract

Recent advances in graph neural networks (GNNs) have enhanced the ability of multimodal recommendation systems to process complex user–item interactions. However, current approaches face two key limitations: they rely on static similarity metrics when building product relationship graphs, and they struggle to fuse information effectively across modalities. We propose MR-CSAF, a novel multimodal recommendation algorithm based on cross-self-attention fusion. Building on FREEDOM, our approach introduces an adaptive modality selector that dynamically weights each modality’s contribution to product similarity, enabling more accurate product relationship graphs and optimized modality representations. We employ a cross-self-attention mechanism to facilitate both inter- and intra-modal information transfer, and use graph convolution to incorporate the updated features into item and product modal representations. Experimental results on three public datasets show that MR-CSAF outperforms eight baseline methods, validating its effectiveness at providing personalized recommendations in complex multimodal environments.
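The abstract's central idea of combining cross-modal (inter-modal) and self- (intra-modal) attention can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the function names, the single-head dot-product attention, and the simple averaging scheme for fusing the four attention outputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, context):
    """Scaled dot-product attention: queries attend over context features.

    With context == queries this is self- (intra-modal) attention;
    with a different modality as context it is cross- (inter-modal) attention.
    """
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)   # (n_q, n_ctx) similarity
    return softmax(scores, axis=-1) @ context   # weighted sum of context rows

def fuse_modalities(visual, textual):
    """Toy cross-self-attention fusion of two modality feature matrices."""
    v_cross = attention(visual, textual)    # visual queries attend to text
    t_cross = attention(textual, visual)    # text queries attend to visuals
    v_self = attention(visual, visual)      # intra-modal refinement
    t_self = attention(textual, textual)
    # Assumed fusion: average inter- and intra-modal outputs per modality,
    # then average the two modality views into one fused representation.
    return ((v_cross + v_self) / 2 + (t_cross + t_self) / 2) / 2

rng = np.random.default_rng(0)
visual = rng.standard_normal((5, 16))   # 5 items, 16-dim visual features
textual = rng.standard_normal((5, 16))  # 5 items, 16-dim text features
fused = fuse_modalities(visual, textual)
print(fused.shape)  # (5, 16): one fused 16-dim vector per item
```

In the actual model these fused representations would feed into the graph-convolution stage; here the sketch only shows how a single fusion step could mix information within and across modalities.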

Keywords:
Fusion; Computer science; Artificial intelligence

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 48.32
References: 41
Citation Normalized Percentile: 0.99 (top 1%)


Topics

Recommender Systems and Techniques
Physical Sciences →  Computer Science →  Information Systems
Image Retrieval and Classification Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

