S. G. Shaila, A. Sindhu, L. Monish, D. Shivamma, B. Vaishali
Nowadays, emotion recognition and classification play a vital role in the field of Human-Computer Interaction (HCI). Emotions can be recognized through bodily behaviors such as facial expressions, voice tone, and body movements. The present research considers Speech Emotion Recognition (SER), one of the most widely used modalities for identifying emotions. The SER dataset collection contains four different datasets; of these, the RAVDESS dataset is used in this project. This mechanism is chosen for its high temporal resolution, absence of risk, and low cost. Over the last decades, many researchers have combined SER signals with Brain-Computer Interface (BCI) approaches to detect emotions. The process includes removing noise from the audio signals, extracting temporal or spectral features from them, analyzing those features in the time or frequency domain respectively, and finally designing a multi-class classification strategy. The paper discusses an approach to identifying and classifying human emotions based on audio signals. The approach uses machine learning techniques such as Random Forest (RF), Multilayer Perceptron (MLP), Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Decision Tree (DT) models for classification. The experimental results obtained are promising, with good accuracy in emotion classification.
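The pipeline described above (feature extraction from audio, then multi-class classification) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: it uses synthetic signals in place of real RAVDESS clips, a power-weighted spectral centroid and RMS energy in place of a full temporal/spectral feature set, and a simple nearest-centroid rule standing in for the RF/MLP/SVM/CNN/DT models. All names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16000  # assumed sample rate for the synthetic clips

def make_clip(pitch_hz):
    """Generate a 1-second synthetic 'utterance' dominated by pitch_hz."""
    t = np.arange(SR) / SR
    return np.sin(2 * np.pi * pitch_hz * t) + 0.1 * rng.standard_normal(SR)

def spectral_features(signal):
    """Two toy spectral features: power-weighted centroid and RMS energy."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / SR)
    centroid = np.sum(freqs * power) / np.sum(power)
    energy = np.sqrt(np.mean(signal ** 2))
    return np.array([centroid, energy])

# Two toy "emotion" classes: low-pitched ("calm") vs high-pitched ("excited").
train = {"calm": [make_clip(120) for _ in range(5)],
         "excited": [make_clip(400) for _ in range(5)]}
centroids = {label: np.mean([spectral_features(c) for c in clips], axis=0)
             for label, clips in train.items()}

def classify(signal):
    """Nearest-centroid classification in the tiny feature space."""
    feats = spectral_features(signal)
    return min(centroids, key=lambda lbl: np.linalg.norm(feats - centroids[lbl]))

print(classify(make_clip(130)))   # a low-pitched clip -> "calm"
print(classify(make_clip(380)))   # a high-pitched clip -> "excited"
```

In a real SER system the feature step would typically use MFCCs, chroma, or mel-spectrogram features, and the classifier would be one of the models named in the abstract; the overall structure of the pipeline stays the same.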