R. Senthamizh Selvi, Jeyalakshmi Chelliah, T. Dineshkumar, Manjunathan Alagarsamy
Recognizing human emotions plays an important role in real life, because reactions to different emotional expressions affect brain signals. Many researchers have therefore been concerned with constructing automatic systems to recognize human emotions. If machines can recognize human emotions, they can effectively look inside the user's mind and act based on the observed mental state. This is made possible by efficient feature-extraction algorithms, model building, and classification algorithms. Nowadays, speech signals receive considerable attention in this kind of research, even though only about 35% of emotions can be inferred from speech. This research focuses on proposing an automated method for identifying emotions from speech signals using a CNN model. Using a branched CNN with MFCC features, 85% accuracy has been achieved, and the same model can also be applied to other emotional datasets. In addition to speech signals, face images are used to verify that the predicted emotion is as accurate as possible. For emotion recognition from facial expressions, a CNN model is developed, trained, and tested on the FER database, achieving an accuracy of 68%. Hence it is observed that emotion prediction from speech signals outperforms prediction from facial expressions. Higher accuracy could be obtained by the same system in future by fusing facial emotions with speech.
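The abstract does not specify the branched CNN architecture or the MFCC configuration, so the following is only a minimal NumPy sketch of the generic pipeline it describes: frame a speech signal, compute MFCC-like coefficients (here with a simplified linearly spaced filterbank rather than a true mel scale), and pass the feature map through a toy convolution-pool-softmax classifier. All layer sizes, filter counts, and the seven-class output are illustrative assumptions, not the authors' model.

```python
import numpy as np

def mfcc_like(signal, frame_len=256, hop=128, n_filters=20, n_coeffs=13):
    # Frame the 1-D signal and compute windowed magnitude spectra.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i*hop : i*hop + frame_len] for i in range(n_frames)])
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    # Toy triangular filterbank (linear spacing; real MFCC uses mel spacing).
    n_bins = spectra.shape[1]
    centers = np.linspace(0, n_bins - 1, n_filters + 2)
    fbank = np.zeros((n_filters, n_bins))
    bins = np.arange(n_bins)
    for m in range(1, n_filters + 1):
        l, c, r = centers[m-1], centers[m], centers[m+1]
        fbank[m-1] = np.clip(np.minimum((bins - l) / (c - l + 1e-9),
                                        (r - bins) / (r - c + 1e-9)), 0, None)
    logmel = np.log(spectra @ fbank.T + 1e-8)
    # DCT-II to decorrelate; keep the first n_coeffs coefficients.
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2*k + 1) / (2*n_filters)))
    return logmel @ dct.T                      # shape: (n_frames, n_coeffs)

def tiny_cnn_forward(feats, n_classes=7, seed=0):
    # One 3x3 convolution -> ReLU -> global average pool -> softmax
    # over emotion classes, with random (untrained) weights.
    rng = np.random.default_rng(seed)
    kernel = rng.standard_normal((3, 3))
    h, w = feats.shape
    conv = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            conv[i, j] = np.sum(feats[i:i+3, j:j+3] * kernel)
    pooled = np.maximum(conv, 0).mean()        # ReLU + global average pool
    logits = rng.standard_normal(n_classes) * pooled
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                 # emotion probability vector

# Stand-in for a speech waveform (a pure tone, for shape checking only).
signal = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
feats = mfcc_like(signal)
probs = tiny_cnn_forward(feats)
print(feats.shape, probs.shape)
```

In a real system the feature map would feed a trained deep network (e.g. the paper's branched CNN); this sketch only makes the data flow and tensor shapes concrete.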