This paper presents a novel approach to hand gesture detection and recognition using machine learning techniques. The proposed approach employs Recurrent Convolutional Neural Networks (RCNNs) for hand gesture recognition, outperforming traditional computer vision techniques. The system was trained and evaluated on a large, diverse dataset of hand gestures, demonstrating high accuracy and processing efficiency, and was further evaluated in real-world scenarios, showing its potential for practical applications in gaming, sign language interpretation, and human-computer interaction.

In addition, this paper presents an approach to American Sign Language (ASL) recognition using deep learning, specifically Long Short-Term Memory (LSTM) networks. The proposed method involves collecting a dataset of ASL letters with a webcam and custom-built software that lets users perform ASL gestures in front of the camera. The dataset was deliberately varied in hand shape, orientation, and lighting conditions to improve the recognition system's accuracy. The MediaPipe module was employed to pre-process the images and extract key features, and it proved a reliable and efficient solution for handling the volume of data; the pre-processed images were then used to train an LSTM model, which achieved 92% accuracy. This approach could help deaf people and those who do not know sign language communicate. The experimental results and implementation details provide a comprehensive understanding of the proposed approach and its potential for future developments in hand gesture detection and recognition using machine learning and Python.
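The abstract's recognition pipeline (per-frame hand landmarks extracted by MediaPipe, fed as a sequence into an LSTM classifier over ASL letters) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes MediaPipe Hands' standard output of 21 landmarks with (x, y, z) coordinates per frame, uses a single hand-rolled LSTM cell with random placeholder weights rather than trained parameters, and stands in a random array for a real webcam clip.

```python
# Hypothetical sketch of the described pipeline: 21 MediaPipe hand
# landmarks x (x, y, z) are flattened into 63-dim feature vectors per
# frame, a sequence of frames is run through one LSTM cell, and the
# final hidden state is classified with a softmax over 26 ASL letters.
# All weights below are untrained random placeholders.
import numpy as np

rng = np.random.default_rng(0)

NUM_LANDMARKS = 21             # MediaPipe Hands returns 21 landmarks
FEATURES = NUM_LANDMARKS * 3   # (x, y, z) per landmark
HIDDEN = 32                    # LSTM hidden-state size (arbitrary)
NUM_CLASSES = 26               # ASL letters A-Z

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stacked LSTM gate weights in order [input, forget, cell, output].
W = rng.normal(scale=0.1, size=(4 * HIDDEN, FEATURES + HIDDEN))
b = np.zeros(4 * HIDDEN)
W_out = rng.normal(scale=0.1, size=(NUM_CLASSES, HIDDEN))

def lstm_classify(frames):
    """frames: (T, 63) array of flattened landmark coordinates."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x in frames:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    logits = W_out @ h
    probs = np.exp(logits - logits.max())   # stable softmax
    return probs / probs.sum()

# A fake 30-frame gesture clip standing in for webcam landmarks.
clip = rng.uniform(size=(30, FEATURES))
probs = lstm_classify(clip)
predicted_letter = chr(ord('A') + int(probs.argmax()))
```

In a real system, `clip` would come from running MediaPipe Hands on webcam frames, and the weights would be learned by training the LSTM on the collected ASL dataset.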
M Yoga, M M Ramyasri, Bhiksha Raj, G Gokul, E. K. Rithick Praveen, Suguna Angamuthu
Ioana-Alexandra Costîn, Letiţia Mirea
K. Hemant Kumar Reddy, Madarapu Ashrith, T.S. Sindhu, Nagashetty B Kolar
Gurjot Kaur, Neha Sharma, Sonal Malhotra, Swati Devliyal, Rupesh Gupta