This work proposes using voice and electrocardiogram (ECG) inputs to train a deep-learning-based system to recognise emotions. The voice and ECG signals are fed into a recurrent neural network (RNN), which extracts features from the signals; the extracted features are then passed to a fully connected layer for emotion classification. The proposed method is evaluated on a dataset of speech and ECG recordings taken from participants during emotional-elicitation tasks, and it achieves higher recognition accuracy than conventional approaches. The experiments indicate that the proposed approach is effective in recognising emotions from both speech and ECG signals, highlighting its potential for real-world applications.
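The pipeline described above — per-modality RNN encoders whose final hidden states are fused and passed to a fully connected classification layer — can be sketched in minimal form. This is an illustrative toy, not the paper's implementation: the Elman-style recurrence, the hidden size, the feature dimensions, and the four example emotion classes (happy, sad, angry, neutral) are all assumptions chosen for brevity, and the weights are random rather than trained.

```python
import math
import random

def rnn_forward(seq, W_xh, W_hh, b_h):
    """Elman RNN encoder (assumed form): returns the final hidden state
    as a fixed-length feature vector for a variable-length signal."""
    h = [0.0] * len(b_h)
    for x in seq:  # one frame of features per time step
        h = [math.tanh(sum(W_xh[i][j] * x[j] for j in range(len(x)))
                       + sum(W_hh[i][k] * h[k] for k in range(len(h)))
                       + b_h[i])
             for i in range(len(b_h))]
    return h

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(speech_seq, ecg_seq, params):
    """Encode each modality with its own RNN, concatenate the features,
    and apply a fully connected layer followed by softmax."""
    h_speech = rnn_forward(speech_seq, *params["speech_rnn"])
    h_ecg = rnn_forward(ecg_seq, *params["ecg_rnn"])
    fused = h_speech + h_ecg  # feature-level fusion by concatenation
    logits = [sum(w * f for w, f in zip(row, fused)) + b
              for row, b in zip(params["W_out"], params["b_out"])]
    return softmax(logits)

# Hypothetical dimensions and untrained random weights, for illustration only.
random.seed(0)
HID, N_CLASSES = 4, 4          # classes: happy, sad, angry, neutral (assumed)
SPEECH_DIM, ECG_DIM = 3, 2     # per-frame feature sizes (assumed)

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

params = {
    "speech_rnn": (rand_mat(HID, SPEECH_DIM), rand_mat(HID, HID), [0.0] * HID),
    "ecg_rnn":    (rand_mat(HID, ECG_DIM),    rand_mat(HID, HID), [0.0] * HID),
    "W_out":      rand_mat(N_CLASSES, 2 * HID),
    "b_out":      [0.0] * N_CLASSES,
}

# Synthetic inputs: 10 speech frames and 25 ECG samples of made-up features.
speech = [[random.uniform(-1, 1) for _ in range(SPEECH_DIM)] for _ in range(10)]
ecg = [[random.uniform(-1, 1) for _ in range(ECG_DIM)] for _ in range(25)]
probs = classify(speech, ecg, params)  # a probability per emotion class
```

In practice each modality would use a trained RNN (e.g. an LSTM or GRU) over real acoustic and ECG features, but the data flow — two encoders, concatenation, one fully connected softmax layer — matches the architecture the abstract describes.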