Sign language is the principal mode of communication for the Deaf and hard-of-hearing community, and conversing across this language barrier remains a significant challenge for hearing non-signers. As a result, there is a need for solutions that can substantially reduce the reliance on human interpreters and enable easier communication between signers and non-signers. The goal of this study is to build an efficient model that can accurately identify sign gestures and translate them into a perceivable format. Deep learning models, namely a CNN (Convolutional Neural Network) and YOLOv5 (You Only Look Once, version 5), are employed to predict gesture classes effectively while keeping the number of parameters manageable for high-dimensional sign images, in which each raw pixel would otherwise be a separate input feature. The trained models were evaluated using mean Average Precision (mAP). The experimental results indicate that the proposed solution achieves the best performance among the evaluated models, with an mAP of 98.8%.
A. Geetha Devi, M. Aparna, N. Mounika, U. Pavan Kalian, Radhika Nath
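As a rough illustration of the detection pipeline summarized above (not the authors' released code), the sketch below fine-tunes a YOLOv5 model on a gesture dataset and reports mAP using the Ultralytics Python API; the dataset config `signs.yaml`, the image path, and all hyperparameters are hypothetical placeholders.

```python
# Sketch: fine-tune a YOLOv5 detector for sign gestures and evaluate mAP.
# Assumes `pip install ultralytics` and a hypothetical dataset config
# `signs.yaml` listing train/val image folders and the gesture class names.
from ultralytics import YOLO

# Start from COCO-pretrained YOLOv5-small weights (Ultralytics build).
model = YOLO("yolov5su.pt")

# Fine-tune on the gesture dataset; epochs and image size are illustrative.
model.train(data="signs.yaml", epochs=100, imgsz=640)

# Validate: reports precision, recall, and mean Average Precision (mAP).
metrics = model.val()
print(f"mAP@0.5      = {metrics.box.map50:.3f}")
print(f"mAP@0.5:0.95 = {metrics.box.map:.3f}")

# Inference on a single frame: returns detected gesture boxes and classes.
results = model("sign_frame.jpg")
results[0].show()
```

The mAP@0.5 value printed by `model.val()` corresponds to the mean Average Precision metric cited in the abstract; the 98.8% figure is the paper's reported result, not an output of this sketch.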