Karampudi Dishank Jagadeeshnaidu
Sign language is a form of communication primarily used by individuals who are deaf and mute, employing hand movements and gestures. This study presents a proposed solution for recognizing hand gestures using a deep learning algorithm, the Convolutional Neural Network (CNN). The CNN processes the images and predicts the gestures. The research focuses on the recognition of five hand gestures from American Sign Language. The proposed system consists of several components: pre-processing and feature extraction, model training and testing, and the conversion of sign language into text. To enhance recognition accuracy, different CNN architectures such as VGG19 were employed, and pre-processing techniques such as greyscale conversion and resizing were designed and evaluated on our dataset.
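The greyscale-and-resize pre-processing described above can be sketched as follows. This is a minimal NumPy-only illustration, not the authors' implementation; the 64x64 target size, the luminance weights, and the function name `preprocess` are assumptions for illustration.

```python
import numpy as np

def preprocess(image, size=(64, 64)):
    """Convert an RGB frame to a normalized greyscale array for CNN input.

    `size` is a hypothetical target resolution; the paper does not state one.
    """
    # Greyscale via standard luminance weights (ITU-R BT.601)
    grey = image @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to the fixed input size
    rows = (np.arange(size[0]) * grey.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * grey.shape[1] / size[1]).astype(int)
    resized = grey[np.ix_(rows, cols)]
    # Scale pixel values to [0, 1] before feeding the CNN
    return resized / 255.0

# Example: a dummy 480x640 RGB frame
frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float64)
x = preprocess(frame)
print(x.shape)  # (64, 64)
```

In practice a library routine (e.g. OpenCV's `cv2.cvtColor` and `cv2.resize`) would replace the hand-rolled conversion, but the data flow is the same: colour frame in, small normalized greyscale array out, ready for a VGG19-style network.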