B Tejaswini, R Aditi, Sharon Sara, M Shubha
Sign language recognition is essential for improving communication accessibility for people with speech and hearing impairments. This study uses a 3D Convolutional Neural Network (3D-CNN) to build a sign language recognition system that classifies hand gestures from video clips. The proposed model extracts features from video frames and classifies them with high accuracy and efficiency. The dataset contains several sign gestures and underwent extensive preprocessing, including frame extraction, augmentation, and normalization, to increase robustness. By incorporating multilingual translation capabilities, the system broadens its accessibility, translating recognized gestures into text in Hindi, Kannada, and English. Our experimental results show 99% accuracy, demonstrating the model's suitability for practical applications such as assistive communication devices and human-computer interaction. This study contributes to the advancement of inclusive AI-powered solutions that help people with hearing loss communicate.
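The abstract does not include code, but the two preprocessing and feature-extraction ideas it names (frame normalization and 3D convolution over a video clip) can be sketched minimally. The following NumPy snippet is illustrative only; the function names, kernel size, and toy clip dimensions are assumptions, not details from the paper, and a real 3D-CNN would stack many learned kernels with nonlinearities.

```python
import numpy as np

def normalize_frames(clip):
    """Scale pixel values to [0, 1], as in the normalization step."""
    clip = clip.astype(np.float32)
    return (clip - clip.min()) / (clip.max() - clip.min() + 1e-8)

def conv3d(volume, kernel):
    """Naive valid-mode 3D cross-correlation over a (frames, height, width)
    volume -- shows how a 3D-CNN layer mixes spatial and temporal detail."""
    fd, fh, fw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - fd + 1, h - fh + 1, w - fw + 1), dtype=np.float32)
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(volume[t:t+fd, i:i+fh, j:j+fw] * kernel)
    return out

# Toy 8-frame, 16x16 grayscale clip (dimensions chosen for illustration).
clip = normalize_frames(np.random.randint(0, 256, (8, 16, 16)))
features = conv3d(clip, np.ones((3, 3, 3)) / 27.0)  # 3x3x3 averaging kernel
print(features.shape)  # -> (6, 14, 14): each axis shrinks by kernel_size - 1
```

A trained network would learn the kernel weights from the gesture dataset rather than using a fixed averaging kernel; the output volume then feeds further convolutional and fully connected layers that produce the gesture class.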