Multimodal emotion recognition must process diverse data modalities such as audio, text, and video. Yet most existing machine perception models optimize the handling of individual modalities and fuse the per-modality representations or predictions only at later stages. These multimodal classification algorithms rely chiefly on the complementarity among modalities to improve classification performance, but exploiting that complementary information often runs into insufficient data and excessive computation. To address these issues, we introduce DynamicMBFN, a multimodal fusion network that applies dynamic evaluation strategies and sparse gating mechanisms to capture variations in the information carried by each modality's features. We further propose a bottleneck mechanism that forces the model to organize and condense the information within each modality while sharing only the information that is needed across modalities. Experiments on the IEMOCAP dataset show that our algorithm not only improves the performance of multimodal information fusion but also effectively reduces computational cost. Our model thus offers an effective solution for multimodal data processing and has substantial practical value for achieving reliable multimodal fusion.
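The paper does not give implementation details, but the two ideas named in the abstract, sparse gating over modalities and a shared low-dimensional bottleneck, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' DynamicMBFN: the gate-scoring heuristic (mean activation magnitude), the top-k value `k`, and the weight shapes are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_gate(scores, k=2):
    """Keep only the top-k gate scores per sample and renormalize;
    the remaining modalities receive exactly zero weight."""
    gates = np.zeros_like(scores)
    topk = np.argsort(scores, axis=-1)[:, -k:]          # indices of k largest
    rows = np.arange(scores.shape[0])[:, None]
    gates[rows, topk] = scores[rows, topk]
    return gates / gates.sum(axis=-1, keepdims=True)

def bottleneck_fuse(feats, W_down, W_up, k=2):
    """feats: list of (batch, d) per-modality feature matrices.
    Each modality is squeezed through a shared low-dim bottleneck
    (W_down: d x b, W_up: b x d), then mixed with sparse gates.
    The bottleneck forces each modality to condense its information
    before any of it is shared."""
    # Hypothetical gate score: mean activation magnitude per modality.
    scores = softmax(np.stack([np.abs(f).mean(axis=1) for f in feats], axis=1))
    gates = sparse_gate(scores, k=k)                    # (batch, n_modalities)
    fused = np.zeros_like(feats[0])
    for m, f in enumerate(feats):
        z = np.tanh(f @ W_down)                         # compress to bottleneck
        fused += gates[:, m:m+1] * (z @ W_up)           # expand and weight
    return fused
```

Because the gates are exactly sparse, modalities outside the top-k contribute nothing to the fused representation, which is one plausible way the described design trades a small loss of information for lower computational cost.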
Peng He, Jun Yu, Chengjie Ge, Wei Jia, W. L. Xu, Lei Wang, Tianyu Liu, Zhen Kan