JOURNAL ARTICLE

Multimodal Fusion with Co-Attention Networks for Fake News Detection

Abstract

Fake news that combines textual and visual content tells a more compelling story than text-only content and can spread quickly through social media. People are easily deceived by such fake news, and traditional expert identification is labor-intensive. Automatic detection of multimodal fake news has therefore become a new hot-spot issue. A shortcoming of existing approaches is their inability to fuse multimodal features effectively: they simply concatenate unimodal features without considering inter-modality relations. Inspired by the way people read news with image and text, we propose novel Multimodal Co-Attention Networks (MCAN) to better fuse textual and visual features for fake news detection. Extensive experiments conducted on two real-world datasets demonstrate that MCAN can learn inter-dependencies among multimodal features and outperforms state-of-the-art methods.
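The abstract contrasts simple concatenation of unimodal features with co-attention fusion, in which each modality attends over the other before the results are combined. A minimal sketch of one such co-attention step, assuming pre-extracted text-token and image-region features already projected to a common dimension (the function and variable names are illustrative and do not reproduce MCAN's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text, image, d_k):
    """Fuse text (Lt, d) and image (Li, d) features via cross-modal attention.

    Each modality serves as queries over the other, so the fused vector
    encodes inter-modality relations rather than a plain concatenation
    of independently pooled unimodal features.
    """
    # Text-guided attention: each text token weights the image regions.
    attn_t2i = softmax(text @ image.T / np.sqrt(d_k), axis=-1)   # (Lt, Li)
    image_ctx = attn_t2i @ image                                 # (Lt, d)

    # Image-guided attention: each image region weights the text tokens.
    attn_i2t = softmax(image @ text.T / np.sqrt(d_k), axis=-1)   # (Li, Lt)
    text_ctx = attn_i2t @ text                                   # (Li, d)

    # Pool the attended contexts and concatenate into one fused vector.
    return np.concatenate([image_ctx.mean(axis=0), text_ctx.mean(axis=0)])
```

A downstream classifier would take the fused vector as input; in practice the dot-product attention above would use learned query/key/value projections and multiple heads, which are omitted here for brevity.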

Keywords:
Multimodal fusion, Fake news detection, Co-attention, Social media

Topics

Misinformation and Its Impacts (Social Sciences: Sociology and Political Science)
Big Data and Digital Economy (Physical Sciences: Computer Science, Information Systems)
Spam and Phishing Detection (Physical Sciences: Computer Science, Information Systems)