Multimodal emotion recognition has attracted much attention recently. Fusing multiple modalities effectively with limited labeled data is a challenging task. Considering the success of pre-trained models and the fine-grained nature of emotion expression, it is natural to take both aspects into account. Unlike previous methods that mainly focus on one aspect, we introduce a novel multi-granularity framework that combines fine-grained representations with pre-trained utterance-level representations. Inspired by Transformer TTS, we propose a multilevel transformer model to perform fine-grained multimodal emotion recognition. Specifically, we explore different methods of incorporating phoneme-level embeddings with word-level embeddings. To perform multi-granularity learning, we simply combine the multilevel transformer model with BERT. Extensive experiments show that the multilevel transformer model outperforms previous state-of-the-art approaches on the IEMOCAP dataset, and the multi-granularity model achieves a further performance improvement.
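The abstract does not spell out the fusion details, so the following is only a minimal sketch of one plausible reading, assuming PyTorch: word-level queries attend over phoneme-level features (one way to "incorporate phoneme-level embeddings with word-level embeddings"), the fused sequence passes through a transformer encoder, and the pooled result is concatenated with a projected BERT utterance embedding for classification. All module names, dimensions, and the specific fusion choice are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiGranularityFusion(nn.Module):
    """Hypothetical sketch of multi-granularity fusion: a fine-grained
    phoneme/word transformer stream combined with a pre-trained
    utterance-level (BERT-style) embedding."""

    def __init__(self, d_model=256, n_heads=4, n_layers=2,
                 bert_dim=768, n_classes=4):
        super().__init__()
        # Cross-attention: word-level features query phoneme-level features.
        # This is one of several conceivable phoneme/word fusion variants.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                                   batch_first=True)
        self.word_encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        # Project the pre-trained utterance embedding into the shared space.
        self.bert_proj = nn.Linear(bert_dim, d_model)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, word_emb, phoneme_emb, bert_cls):
        # word_emb:    (B, T_w, d_model)  word-level features
        # phoneme_emb: (B, T_p, d_model)  phoneme-level features
        # bert_cls:    (B, bert_dim)      pre-trained utterance embedding
        fused, _ = self.cross_attn(word_emb, phoneme_emb, phoneme_emb)
        fine = self.word_encoder(fused).mean(dim=1)  # pool fine-grained stream
        utter = self.bert_proj(bert_cls)             # utterance-level stream
        return self.classifier(torch.cat([fine, utter], dim=-1))

# Toy usage with random tensors (shapes only; real inputs would come from
# aligned phoneme/word features and a BERT encoder).
model = MultiGranularityFusion()
logits = model(torch.randn(2, 10, 256),   # 10 words
               torch.randn(2, 40, 256),   # 40 phonemes
               torch.randn(2, 768))       # BERT [CLS] embedding
print(logits.shape)  # torch.Size([2, 4])
```

Concatenating the two streams keeps the pre-trained utterance-level representation intact while letting the fine-grained transformer stream contribute complementary phoneme/word-level cues, which matches the multi-granularity framing above.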