A. Sunitha Nandhini, D. Shiva Roopan, S. Shiyaam, S. Yogesh
Abstract Sign language is the lingua franca of the speech- and hearing-impaired community. It is hard for most people who are unfamiliar with sign language to communicate without an interpreter. Sign language recognition pertains to tracking and recognizing the meaningful motions humans make with the head, arms, hands, fingers, etc. The technique implemented here transcribes gestures from sign language into a spoken language that is easily understood by the listener. The translated gestures include alphabets and words from static images. This becomes especially important when people who rely entirely on gestural sign language for communication try to communicate with a person who does not understand sign language. Most systems currently in use face recognition problems with varying skin tones; by introducing a filter, the proposed system identifies symbols irrespective of skin tone. The aim is to learn feature representations with a convolutional neural network (CNN), which contains four types of layers: convolution layers, pooling/subsampling layers, non-linear layers, and fully connected layers.
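To make the four layer types concrete, the following is a minimal PyTorch sketch, not the authors' implementation; the 64x64 grayscale input, the channel widths, and the 26-class (A-Z) output are illustrative assumptions.

# Minimal sketch of a CNN using the four layer types named in the
# abstract: convolution, pooling/subsampling, non-linearity, and
# fully connected layers. Input/output sizes are assumptions.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),                                   # non-linear layer
            nn.MaxPool2d(2),                             # pooling/subsampling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layer mapping flattened features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Example: one 64x64 grayscale gesture image -> 26 class scores.
logits = SignCNN()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 26])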