This study addresses communication barriers faced by individuals with hearing impairments by exploring automated recognition of sign language gestures. Automated recognition of these gestures helps people with hearing impairments communicate and engage with the wider community more effectively. Using deep learning, particularly Long Short-Term Memory (LSTM) networks, the system analyzes images or videos of manual sign gestures captured by a camera and predicts the corresponding sign language expressions. By leveraging the temporal dependencies in gestures, this approach aims to narrow the communication gap and provide an effective solution for sign language interpretation.
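To illustrate the idea of an LSTM mapping a sequence of per-frame features (for example, hand keypoints extracted from camera frames) to a sign class, here is a minimal numpy sketch of a single LSTM cell unrolled over time, followed by a softmax classifier. All dimensions, the weight initialization, and the random input are illustrative assumptions for the sketch, not the authors' actual architecture or trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(frames, params):
    """Run one LSTM cell over a sequence of frame features and
    classify the final hidden state with a softmax layer.

    frames: (T, D) array - one feature vector per video frame.
    params: dict with stacked gate weights W (4H, D+H), biases b (4H,),
            and an output layer Wy (C, H), by (C,).
    """
    W, b, Wy, by = params["W"], params["b"], params["Wy"], params["by"]
    H = Wy.shape[1]
    h = np.zeros(H)          # hidden state
    c = np.zeros(H)          # cell state
    for x in frames:
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        o = sigmoid(z[2 * H:3 * H])  # output gate
        g = np.tanh(z[3 * H:4 * H])  # candidate cell update
        c = f * c + i * g            # temporal memory update
        h = o * np.tanh(c)           # new hidden state
    logits = Wy @ h + by
    e = np.exp(logits - logits.max())
    return e / e.sum()               # class probabilities

# Illustrative sizes: 16 frames, 42 features per frame
# (e.g., 21 hand keypoints x 2 coordinates), 32 hidden units,
# 10 sign classes. Weights here are random, untrained.
rng = np.random.default_rng(0)
D, H, C, T = 42, 32, 10, 16
params = {
    "W": rng.normal(0, 0.1, (4 * H, D + H)),
    "b": np.zeros(4 * H),
    "Wy": rng.normal(0, 0.1, (C, H)),
    "by": np.zeros(C),
}
probs = lstm_classify(rng.normal(size=(T, D)), params)
```

Because the hidden and cell states carry information across frames, the final prediction depends on the whole gesture trajectory rather than any single frame, which is the temporal-dependency property the abstract refers to.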
A S Sushmitha Urs, Vaibhavi B Raj, S Pooja, Prasanna Kumar K, B R Madhu, Vinod Kumar S
Neetu Singla, Vinayak Choubey, Swarnima Rai
P. Ilanchezhian, Ishanvi Singh, M. Balaji, A. Manoj Kumar, S. Muhamad Yaseen