Hanqian Wu, Xinwei Li, Lu Li, Qipeng Wang
Multi-modal propaganda techniques detection (MPTD) aims to detect the types of propaganda techniques used in memes. However, dominant MPTD models exhibit a large semantic gap with downstream tasks, which means that a large amount of multi-modal data is required. In this paper, we propose a low-resource approach to detect the types of propaganda techniques used in memes, focusing on both the textual and image modalities. Specifically, we design a prompt-based multi-modal fine-tuning schema to incorporate visual clues into the language model. Our analysis of the corpus shows that our approach is highly effective in a low-resource setting. This is further confirmed by our experiments against several state-of-the-art models.
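The abstract gives no architectural details, but the general idea of a prompt-based multi-modal schema can be sketched as projecting image features into "soft prompt" vectors that are prepended to the text embeddings fed to a language model. The sketch below is a toy illustration under assumed dimensions and names (`D_IMG`, `D_TXT`, `N_PROMPT`, `build_multimodal_input`); it is not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG = 512      # feature size of a frozen image encoder (assumed)
D_TXT = 768      # hidden size of the language model (assumed)
N_PROMPT = 4     # number of visual prompt tokens (assumed)

# Trainable projection mapping one image feature vector
# to N_PROMPT prompt embeddings in the LM's embedding space.
W_proj = rng.normal(scale=0.02, size=(D_IMG, N_PROMPT * D_TXT))

def build_multimodal_input(image_feat, text_embeds):
    """Prepend projected visual prompt tokens to the text token embeddings."""
    prompts = (image_feat @ W_proj).reshape(N_PROMPT, D_TXT)
    return np.concatenate([prompts, text_embeds], axis=0)

image_feat = rng.normal(size=D_IMG)          # e.g. a global image feature
text_embeds = rng.normal(size=(10, D_TXT))   # embeddings of 10 meme-text tokens

inputs = build_multimodal_input(image_feat, text_embeds)
print(inputs.shape)  # (14, 768): 4 visual prompt tokens + 10 text tokens
```

In such a low-resource setting, typically only the small projection (here `W_proj`) and a prompt/verbalizer head are trained while the language model stays frozen, which is what keeps the data requirement low.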