M. H. Li, Dingkang Yang, Lihua Zhang
Multimodal Sentiment Analysis (MSA) has recently attracted widespread research attention. Most MSA studies assume signal completeness; however, many unavoidable factors in real-world applications lead to uncertain signal missing, which significantly degrades model performance. To this end, we propose a Robust multimodal Missing Signal Framework (RMSF) that handles uncertain signal missing in MSA tasks and can be generalized to other multimodal settings. Specifically, a hierarchical cross-modal interaction module in RMSF exploits potential complementary semantics among modalities via coarse- and fine-grained cross-modal attention. Furthermore, we design an adaptive feature refinement module to enhance the beneficial semantics of each modality and filter out redundant features. Finally, we propose a knowledge-integrated self-distillation module that enables dynamic knowledge integration and bidirectional knowledge transfer within a single network to precisely reconstruct missing semantics. Comprehensive experiments on two datasets show that RMSF significantly improves MSA performance under both uncertain missing-signal and complete-signal conditions.
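To make the cross-modal attention idea concrete, the following is a minimal sketch of one such interaction step, in which one modality (e.g. text) queries another (e.g. audio) to absorb complementary semantics. This is an illustrative assumption, not the RMSF implementation: the class name, feature dimensions, and residual-plus-norm design are hypothetical choices built on standard multi-head attention.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Hypothetical sketch of a single cross-modal attention step.

    The query modality (e.g. text features) attends over a second
    modality (e.g. audio features), so its representation is enriched
    with complementary semantics. All names and sizes are illustrative.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod: torch.Tensor, key_mod: torch.Tensor) -> torch.Tensor:
        # query_mod: (batch, len_q, dim); key_mod: (batch, len_k, dim)
        fused, _ = self.attn(query_mod, key_mod, key_mod)
        # Residual connection keeps the original modality semantics;
        # layer norm stabilizes the fused representation.
        return self.norm(query_mod + fused)

# Toy usage: 2 samples, 10 text tokens and 20 audio frames, 64-dim features.
text = torch.randn(2, 10, 64)
audio = torch.randn(2, 20, 64)
out = CrossModalAttention()(text, audio)
print(out.shape)  # same shape as the query modality: (2, 10, 64)
```

Stacking such blocks in both directions (text→audio, audio→text) at coarse and fine granularity is one common way to realize the hierarchical interaction the abstract describes.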