Guang Yang, Deguo Yang, Jiaxin Li, Guojun Wang
Accurate emotion recognition can greatly improve the reliability of interpersonal communication and the diagnosis of mental illness. To address the difficulty of improving the accuracy of single-modal emotion recognition, a multimodal emotion recognition method is proposed that fuses speech, facial expression, and EEG signals. An attention mechanism is introduced at the feature-layer fusion stage, and recognition accuracy is further improved by an optimal weight-allocation algorithm for decision-layer fusion. The method achieves a recognition accuracy of 94.53% on the MAHNOB-HCI dataset, 92.89% on the SEED dataset, and 91.54% on a self-built dataset, which shows improved accuracy over single-modal emotion recognition and good generalization ability.
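The abstract does not give implementation details, so the following is only a minimal sketch of what attention-based feature-layer fusion of the three modalities could look like: each modality's feature vector is scored, the scores are softmax-normalized into attention weights, and the fused representation is the weighted sum. The function name `attention_fuse` and the random projection standing in for a learned scoring layer are illustrative assumptions, not the authors' method.

```python
import numpy as np

def attention_fuse(features, seed=0):
    """Attention-weighted feature-level fusion (illustrative sketch).

    `features`: list of equal-length modality feature vectors,
    e.g. [speech, facial_expression, eeg]. A random projection
    stands in for a learned scoring layer (assumption).
    """
    F = np.stack(features)                         # (n_modalities, dim)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(F.shape[1])            # stand-in for learned weights
    scores = F @ w                                 # one relevance score per modality
    scores -= scores.max()                         # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha @ F                               # (dim,) fused feature vector

# Toy modality features of equal dimension
speech = np.full(8, 0.2)
face = np.full(8, 0.5)
eeg = np.full(8, 0.9)
fused = attention_fuse([speech, face, eeg])
```

Because the attention weights form a convex combination, each fused component stays within the range spanned by the modality features; in a trained system the scoring projection would be learned jointly with the recognition network.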