Muneer Al-Hammadi, Ghulam Muhammad, Wadood Abdul, Mansour Alsulaiman, Mohamed A. Bencherif, Mohamed Amine Mekhtiche
Recently, automatic hand gesture recognition has gained increasing importance for two principal reasons: the growth of the deaf and hearing-impaired population, and the development of vision-based applications and touchless control on ubiquitous devices. Because hand gesture recognition is at the core of sign language analysis, a robust hand gesture recognition system should consider both spatial and temporal features. Unfortunately, finding discriminative spatiotemporal descriptors for a hand gesture sequence is not a trivial task. In this study, we propose an efficient approach for hand gesture recognition based on deep convolutional neural networks. The proposed approach employs transfer learning to overcome the scarcity of large labeled hand gesture datasets. We evaluated it on three gesture datasets of color videos, using 40, 23, and 10 classes from these datasets, respectively. In the signer-dependent mode, the approach obtained recognition rates of 98.12%, 100%, and 76.67% on the three datasets, respectively. In the signer-independent mode, it obtained recognition rates of 84.38%, 34.9%, and 70%, respectively.
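The core idea of transfer learning mentioned in the abstract, reusing features learned on a large dataset and training only a small classifier on the scarce labeled gesture data, can be illustrated with a minimal sketch. The code below is a toy stand-in, not the paper's method: a frozen random projection plays the role of the pretrained convolutional backbone, synthetic vectors play the role of gesture frames, and only a logistic-regression head is trained. All names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone": a fixed random projection followed by ReLU.
# (Hypothetical stand-in for the deep CNN feature extractor; its weights
# are never updated, mirroring the frozen layers in transfer learning.)
W_frozen = rng.normal(size=(64, 32))

def extract_features(x):
    """Map raw inputs to fixed features; no gradient flows into W_frozen."""
    return np.maximum(x @ W_frozen, 0.0)

# Small synthetic "labeled gesture" dataset: 200 samples, 2 classes.
n = 200
X = rng.normal(size=(n, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable classification head: logistic regression on the frozen features.
F = extract_features(X)
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.05
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid probabilities
    grad_w = F.T @ (p - y) / n              # only the head is updated
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(float)
acc = float(np.mean(pred == y))
print(f"head accuracy on frozen features: {acc:.2f}")
```

In a real system the frozen projection would be replaced by a CNN pretrained on a large image corpus, and the head would be fine-tuned on the labeled gesture videos; the division of labor, fixed feature extractor plus small trainable classifier, is the same.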