Sathish, SravanakumarM, Roshan Aditya
In this paper, a novel framework for music selection is presented that incorporates speech analysis and facial expression-based emotion recognition. The system provides personalized music recommendations based on the user's current emotional state. A single convolutional neural network (CNN) model, trained on real-time data streams, performs emotion recognition. The SpeechRecognition and OpenCV libraries are used for voice and facial emotion detection, respectively. Music recommendations are generated via the OpenAI API, leveraging large language models to suggest songs aligned with the detected emotion. The proposed approach offers a personalized, AI-driven music experience and advances the integration of emotional intelligence with recommendation systems.
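The abstract describes a pipeline that fuses facial and vocal emotion cues and then prompts an LLM for matching songs. A minimal sketch of that glue logic is shown below; the emotion label set, the weighted-fusion rule, and the prompt wording are all illustrative assumptions, not the authors' exact implementation:

```python
# Hypothetical sketch of the emotion-to-recommendation pipeline.
# Labels, fusion weights, and prompt text are assumptions for illustration.

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse_predictions(face_scores, voice_scores, face_weight=0.6):
    """Combine per-emotion confidence scores from the facial CNN and the
    speech analysis into one label by weighted averaging (assumed scheme)."""
    fused = {
        e: face_weight * face_scores.get(e, 0.0)
           + (1.0 - face_weight) * voice_scores.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(fused, key=fused.get)

def build_prompt(emotion, n_songs=5):
    """Build the text prompt that would be sent to the LLM
    (e.g. via the OpenAI API) to request mood-matched songs."""
    return (
        f"The listener currently feels {emotion}. "
        f"Suggest {n_songs} songs that suit this mood, one per line."
    )

# Example: facial cues dominate under the assumed 0.6 face weight.
face = {"happy": 0.9, "sad": 0.05}
voice = {"sad": 0.8, "happy": 0.1}
mood = fuse_predictions(face, voice)
prompt = build_prompt(mood)
```

How face and voice scores are actually combined (or whether a single modality is used per frame) is not specified in the abstract; the weighted average above is only one plausible design choice.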