ZHANG Tianzhi, ZHOU Gang, LIU Hongbo, LIU Shuo, CHEN Jing
Multimodal aspect-based sentiment analysis is an emerging task in the field of multimodal sentiment analysis, which aims to identify the sentiment of each given aspect in a text-image pair. Although recent research on multimodal sentiment analysis has made breakthrough progress, most existing models use only simple concatenation for multimodal feature fusion, without considering whether the image contains information semantically irrelevant to the text, which may introduce additional interference into the model. To address this problem, this paper proposes a text-image gated fusion mechanism (TIGFM) model for multimodal aspect-based sentiment analysis, which introduces adjective-noun pairs (ANPs) extracted from the dataset images during text-image interaction and treats the weighted adjectives as auxiliary image information. In addition, multimodal feature fusion is achieved by constructing a gating mechanism that dynamically controls the contributions of the image and the auxiliary image information during the fusion stage. Experimental results demonstrate that the TIGFM model achieves competitive results on two Twitter datasets, validating the effectiveness of the proposed method.
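The gated fusion idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the vector dimensions, the sigmoid gate formulation, and the convex combination of image and ANP-adjective features are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8
text = rng.standard_normal(d)      # text representation (hypothetical)
image = rng.standard_normal(d)     # image representation (hypothetical)
anp_adj = rng.standard_normal(d)   # weighted-adjective (ANP) auxiliary vector

# Hypothetical gate parameters: the gate is conditioned on text and image.
W = rng.standard_normal((d, 2 * d))
b = np.zeros(d)

# Gate values lie in (0, 1) and decide, per dimension, how much of the raw
# image feature versus the ANP auxiliary feature enters the fusion.
gate = sigmoid(W @ np.concatenate([text, image]) + b)
fused = np.concatenate([text, gate * image + (1 - gate) * anp_adj])
```

The fused vector would then feed a downstream classifier; when the image is semantically irrelevant to the text, the gate can suppress it in favor of the adjective-based auxiliary signal.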