JOURNAL ARTICLE

Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

Zhiyan Han, Jian Wang

Year: 2016 Journal: MATEC Web of Conferences Vol: 61 Pages: 03012 Publisher: EDP Sciences

Abstract

In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes speech signals and facial expression signals as its research subjects. First, the speech-signal features and facial-expression features are fused, sample sets are drawn by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double-error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level fusion and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
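The pipeline in the abstract can be sketched as a bagging-style ensemble: concatenate the two modalities' features, bootstrap training sets, fit one BP-style network (here an MLP stands in for the paper's BPNN) per sample set, and fuse decisions by majority vote. This is a minimal illustration on synthetic data; the feature dimensions, network size, and labels are invented, and the paper's double-error difference selection step is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical fused features: speech and facial-expression feature
# vectors concatenated at the feature level (dimensions are illustrative).
n_samples = 200
speech_feats = rng.normal(size=(n_samples, 10))
face_feats = rng.normal(size=(n_samples, 8))
X = np.hstack([speech_feats, face_feats])     # feature-level fusion
y = (X[:, 0] + X[:, 10] > 0).astype(int)      # toy two-class labels

# "Putting back" (bootstrap) sampling: draw several training sets with
# replacement and train one MLP on each, standing in for the BPNN.
classifiers = []
for seed in range(5):
    idx = rng.integers(0, n_samples, size=n_samples)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=seed)
    clf.fit(X[idx], y[idx])
    classifiers.append(clf)

# Decision-level fusion: majority vote over the ensemble's predictions.
votes = np.stack([clf.predict(X) for clf in classifiers])
majority = (votes.sum(axis=0) > len(classifiers) / 2).astype(int)
accuracy = (majority == y).mean()
print(f"ensemble training accuracy: {accuracy:.2f}")
```

The vote threshold of half the ensemble size implements a simple majority rule; with an odd number of classifiers there are no ties to break.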

Keywords:
Speech recognition, Computer science, Facial expression, Feature (linguistics), Pattern recognition (psychology), Artificial intelligence, SIGNAL (programming language), Fuse (electrical), Majority rule, Feature selection, Emotion classification, Expression (computer science), Emotion recognition, Engineering

Metrics

Cited By: 4
FWCI (Field Weighted Citation Impact): 0.57
Refs: 20
Citation Normalized Percentile: 0.77

Topics

Emotion and Mood Recognition
Social Sciences →  Psychology →  Experimental and Cognitive Psychology
Face and Expression Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Color perception and design
Social Sciences →  Psychology →  Social Psychology
