JOURNAL ARTICLE

Multimodal Transformer Fusion for Continuous Emotion Recognition

Abstract

Multimodal fusion improves the performance of emotion recognition by exploiting the complementarity of different modalities. Compared with decision-level and feature-level fusion, model-level fusion makes better use of the advantages of deep neural networks. In this work, we use the Transformer model to fuse audio-visual modalities at the model level. Specifically, after the audio and visual modalities are encoded, multi-head attention produces multimodal emotional intermediate representations from a common semantic feature space. It can also learn long-term temporal dependencies effectively through the self-attention mechanism. Experiments on the AVEC 2017 database show the superiority of model-level fusion over other fusion strategies. Moreover, we combine the Transformer model with an LSTM to further improve performance, achieving better results than other methods.
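
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the model-level fusion idea: each modality is projected into a common semantic feature space, a Transformer encoder applies multi-head self-attention over the fused sequence to capture long-term temporal dependencies, and an LSTM head produces continuous per-frame predictions. All dimensions, module names, and the exact fusion layout (summing the aligned per-frame projections) are illustrative assumptions, not the authors' published implementation.

    # Illustrative sketch only: layer sizes, names, and the fusion
    # layout are assumptions, not the paper's published architecture.
    import torch
    import torch.nn as nn

    class TransformerFusion(nn.Module):
        def __init__(self, audio_dim=88, visual_dim=136, d_model=128,
                     nhead=4, num_layers=2, lstm_hidden=64):
            super().__init__()
            # Project each modality into a common semantic feature space.
            self.audio_proj = nn.Linear(audio_dim, d_model)
            self.visual_proj = nn.Linear(visual_dim, d_model)
            # Transformer encoder: multi-head self-attention fuses the
            # modalities and models long-term temporal dependencies.
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            # LSTM head to refine temporal dynamics before regression.
            self.lstm = nn.LSTM(d_model, lstm_hidden, batch_first=True)
            self.out = nn.Linear(lstm_hidden, 1)  # continuous affect value

        def forward(self, audio, visual):
            # audio: (batch, time, audio_dim); visual: (batch, time, visual_dim)
            a = self.audio_proj(audio)
            v = self.visual_proj(visual)
            # Model-level fusion: sum the aligned per-frame representations
            # so attention operates on a joint sequence. (Concatenation is
            # an equally plausible variant.)
            fused = self.encoder(a + v)
            h, _ = self.lstm(fused)
            return self.out(h).squeeze(-1)  # (batch, time) predictions

    model = TransformerFusion()
    audio = torch.randn(2, 100, 88)    # e.g. frame-level acoustic features
    visual = torch.randn(2, 100, 136)  # e.g. facial landmark features
    pred = model(audio, visual)        # per-frame continuous emotion estimate
    print(pred.shape)                  # torch.Size([2, 100])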

Keywords:
Computer science, Artificial intelligence, Machine learning, Pattern recognition, Emotion recognition, Multimodal fusion, Fusion mechanism, Transformer, Modalities, Complementarity, Audio-visual, Speech recognition, Visualization, Engineering

Metrics

Cited by: 153
FWCI (Field-Weighted Citation Impact): 14.83
References: 25
Citation Normalized Percentile: 0.99 (top 1%)

Topics

Emotion and Mood Recognition (Social Sciences → Psychology → Experimental and Cognitive Psychology)
Color perception and design (Social Sciences → Psychology → Social Psychology)
Speech and Audio Processing (Physical Sciences → Computer Science → Signal Processing)