This study presents a novel method for identifying facial expressions and emotions, with an emphasis on improving accessibility for people with visual impairments through real-time aural feedback. Facial expressions are central to effective communication, but people who are blind or visually impaired often cannot interpret the emotional messages conveyed through nonverbal facial cues. This project aims to create an inclusive system that helps visually impaired individuals identify and understand emotions in their social surroundings by combining state-of-the-art computer vision algorithms with aural feedback. The proposed method uses a deep learning-based architecture to detect and classify a range of facial emotions from live video streams. When an emotion is identified, the system informs the user through synthesized audio cues.
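The detect-classify-announce pipeline described above can be outlined as a minimal sketch. The emotion label set, the `classify` argmax step, and the `audio_cue` phrasing are illustrative assumptions, not the paper's exact implementation; in practice the scores would come from a trained CNN over each video frame and the phrase would be passed to a text-to-speech engine such as pyttsx3.

```python
# Hedged sketch of the detect -> classify -> announce loop.
# The label set and functions below are assumptions for illustration;
# the paper's actual model and TTS backend are not specified here.

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def classify(logits):
    """Pick the most likely emotion from raw per-class scores (argmax)."""
    return EMOTIONS[max(range(len(logits)), key=lambda i: logits[i])]

def audio_cue(label):
    """Compose the phrase a TTS engine would synthesize for the user."""
    return f"The person appears to be {label}."

# Example: hypothetical per-class scores for one video frame
frame_logits = [0.1, 0.0, 0.2, 2.5, 0.3, 0.1, 0.4]
print(audio_cue(classify(frame_logits)))  # -> "The person appears to be happy."
```

In a live system this loop would run per frame (or per few frames), with the spoken cue rate-limited so the user is not flooded with repeated announcements of the same emotion.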
Omkar Sanjay Ranjane, Mr. Amol Ravikant Shetye, Mr. Nayan Ashok Sangare
Santosh Kumar, Shubam Jaiswal, Rahul Kumar, Sanjay Kumar Singh
Jay Naimesh Patel, Jinan Fiaidhi