JOURNAL ARTICLE

Multimodal Emotion Recognition using Deep Learning

Abstract

Recent research into human–computer interaction seeks to take the user's emotional state into account in order to provide a seamless human–computer interface. This would enable affect-aware systems to be used in widespread fields, including education and medicine. Human emotions can be inferred through multiple channels, including facial expressions and images, physiological signals, and neuroimaging techniques. This paper presents a review of emotion recognition from multimodal signals using deep learning and compares their applications based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions, as they offer higher classification accuracy. Accuracy varies with the number of emotions observed, the features extracted, the classification system, and the consistency of the database. Numerous theories on the methodology of emotion detection and recent work in affective science address these topics. This review should help researchers better understand physiological signals, the current state of the science, and its open problems in emotion recognition.
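The abstract contrasts unimodal systems with multimodal ones, which combine several signal channels for higher classification accuracy. One common combination strategy is decision-level (late) fusion, where each modality's classifier outputs a probability distribution over the emotion labels and the distributions are averaged. The sketch below is a minimal illustration of that idea only; the modality names, scores, and label set are hypothetical and not taken from the paper:

```python
import math

# Hypothetical shared emotion label set for all modalities.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def softmax(scores):
    """Convert raw classifier scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(modality_scores):
    """Average per-modality distributions and return the arg-max emotion.

    modality_scores: list of raw score vectors, one per modality,
    each aligned with EMOTIONS.
    """
    dists = [softmax(scores) for scores in modality_scores]
    fused = [sum(d[i] for d in dists) / len(dists)
             for i in range(len(EMOTIONS))]
    return EMOTIONS[fused.index(max(fused))], fused

if __name__ == "__main__":
    # Raw scores from three hypothetical unimodal classifiers:
    face = [2.0, 0.1, 0.3, 0.5]    # facial-expression model
    speech = [1.5, 0.2, 0.8, 0.4]  # speech/prosody model
    eeg = [1.8, 0.5, 0.2, 0.9]     # physiological (EEG) model
    label, dist = late_fusion([face, speech, eeg])
    print(label)
```

Feature-level (early) fusion, which concatenates the modality features before a single classifier, is the usual alternative; late fusion has the practical advantage that each modality's model can be trained and replaced independently.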

Keywords:
Affective computing, Emotion recognition, Emotion classification, Deep learning, Artificial intelligence, Affective science, Human–computer interaction, Cognitive psychology, Social psychology, Psychology, Computer science

Metrics

Cited By: 296
FWCI (Field-Weighted Citation Impact): 44.71
References: 76
Citation Normalized Percentile: 1.00 (in top 1%)

Topics

Emotion and Mood Recognition (Social Sciences → Psychology → Experimental and Cognitive Psychology)
EEG and Brain–Computer Interfaces (Life Sciences → Neuroscience → Cognitive Neuroscience)
ECG Monitoring and Analysis (Health Sciences → Medicine → Cardiology and Cardiovascular Medicine)