The estimation of grasp states in myoelectric prosthetic hands is relevant for ergonomic interfacing, control, and rehabilitation. In this paper we evaluate the feasibility of inferring the grasp state of a prosthetic hand from RGB frames using well-known deep learning architectures, in testing scenarios involving variations of brightness, contrast, and flips. Our results show that prosthetic hand poses can be estimated accurately and efficiently with a GoogLeNet-based deep architecture using relatively few training frames.
Yazan M. Dweiri, Mohammad M. AlAjlouni, Jawdat R. Ayoub, Alaa Y. Al-Zeer, Ali H. Hejazi
Beatriz León, Carlos Rubert, Joaquín L. Sancho-Bru, Antonio Morales
Zengzhi Zhao, Weiwei Shang, Haoyuan He, Zhijun Li
Xiaobao Deng, Xiaogang Duan, Hua Deng
Ghazal Ghazaei, Ali Alameer, Patrick Degenaar, Graham Morgan, Kianoush Nazarpour