Conversing with hearing persons has always been difficult for those who are deaf or mute. This paper surveys methods that have been introduced to help them communicate effectively. Human interpreters and assistive tools exist, but not everyone can afford such aid, and sign language remains their primary mode of communication. The project's goal is therefore to assist these individuals with a system that recognizes signs and translates them into text, enabling them to lead a normal social life. Previously, a hand-detection method was developed as a learning tool for sign-language novices. That system was built on skin-color modeling, specifically explicit skin-color space thresholding: a specified range of skin tones separates skin pixels (the hand) from non-skin pixels (the background). The segmented photos are then given as input to a convolutional neural network (CNN), a deep learning model, and we implement the training using Keras. This document reviews a variety of projects and research on sign language detection in the domains of machine learning, deep learning, and image depth data, and it considers a number of the problems that must be overcome in solving this task, as well as the future scope.
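The explicit skin-color space thresholding step mentioned above can be sketched as follows. This is a minimal illustration in NumPy, assuming a commonly cited explicit RGB decision rule; the exact thresholds used by the original system are not given in this abstract, so the values below are illustrative assumptions.

```python
import numpy as np

def skin_mask_rgb(image):
    """Boolean mask of likely skin pixels via explicit RGB thresholding.

    The thresholds are a commonly cited explicit rule for skin in RGB
    space, used here only as an illustrative assumption.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    spread = image.max(axis=-1).astype(int) - image.min(axis=-1).astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)   # each channel above a floor
        & (spread > 15)                  # enough channel contrast
        & (np.abs(r - g) > 15)           # red clearly dominates green
        & (r > g) & (r > b)              # red is the strongest channel
    )

def segment_hand(image):
    """Keep only skin pixels (the hand); zero out the background."""
    mask = skin_mask_rgb(image)
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out
```

In a full pipeline, the mask (or the segmented image) would be resized and fed to the CNN; restricting the input to the hand region reduces the background variation the network has to learn.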
Swapnil Shinde, Parikshit N. Mahalle, Sayee Panchal, Shreya Mahalle, Achal K. Srivastava, Parag Tonpe