Yang Wu, Pengwei Zhan, Yunjian Zhang, Liming Wang, Zhen Xu
Fake news with both textual and visual content tells a more convincing story than text-only content and can spread quickly through social media. People are easily deceived by such fake news, and traditional expert verification is labor-intensive. Automatic detection of multimodal fake news has therefore become a pressing research problem. A shortcoming of existing approaches is their inability to fuse multimodal features effectively: they simply concatenate unimodal features without modeling inter-modality relations. Inspired by the way people read news containing both image and text, we propose a novel Multimodal Co-Attention Network (MCAN) that better fuses textual and visual features for fake news detection. Extensive experiments on two real-world datasets demonstrate that MCAN learns inter-dependencies among multimodal features and outperforms state-of-the-art methods.
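The core idea of co-attention, letting each modality attend over the other so that fused features capture inter-modality dependencies rather than a flat concatenation, can be illustrated with a minimal sketch. This is not the authors' MCAN architecture (which stacks multiple co-attention layers over BERT text features and VGG image features); it is a generic scaled dot-product co-attention block under the assumption that both modalities have been projected to a common dimension `d`:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text_feats, image_feats):
    """Scaled dot-product co-attention between two modalities.

    text_feats:  (n, d) array of n text token features
    image_feats: (m, d) array of m image region features
    Returns image-aware text features (n, d) and
    text-aware image features (m, d).
    """
    d = text_feats.shape[-1]
    # Affinity between every text token and every image region.
    scores = text_feats @ image_feats.T / np.sqrt(d)        # (n, m)
    # Each text token aggregates the image regions it attends to.
    text_attends_image = softmax(scores, axis=1) @ image_feats   # (n, d)
    # Each image region aggregates the text tokens it attends to.
    image_attends_text = softmax(scores.T, axis=1) @ text_feats  # (m, d)
    return text_attends_image, image_attends_text
```

In a full model, the two attended outputs would be combined with the original features (e.g. via residual connections and further layers) before the final real/fake classifier; the sketch only shows the cross-modal attention step itself.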