Intricacy is one of the challenges associated with robotic hand systems; offering simple and efficient systems reduces the chance that they will be rejected by users. This study aims to develop a deep learning-based model for automated detection of the suitable grasp form for objects. The methodology combines the U-Net segmentation model and image processing with five distinct classification models. The data source is the Amsterdam Library of Object Images (ALOI), a collection of coloured small objects. The limitation of the dataset's uniformly black backgrounds was overcome by ground-truth labelling the objects into four grasp forms and substituting varied backgrounds. Two experiments were conducted on the dataset. Performance was evaluated by the accuracy and intersection over union (IoU) of the segmentation algorithm, and by the accuracy, sensitivity, specificity, and precision of the classification models. The proposed system yielded reliable results; in particular, the proposed segmentation algorithm produced a significant improvement in the performance and efficacy of all five models.
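As a minimal sketch of the evaluation metrics named above (the abstract does not give formulas, so the definitions below are the standard ones, assumed rather than taken from the paper), IoU for a segmentation mask and the four confusion-matrix metrics for a binary classifier can be computed as:

```python
def iou(pred, truth):
    """Intersection over union of two binary masks given as flat lists of 0/1."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # empty masks agree perfectly

def classification_metrics(pred, truth, positive=1):
    """Accuracy, sensitivity, specificity, precision from predicted/true labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == positive and t == positive)
    tn = sum(1 for p, t in zip(pred, truth) if p != positive and t != positive)
    fp = sum(1 for p, t in zip(pred, truth) if p == positive and t != positive)
    fn = sum(1 for p, t in zip(pred, truth) if p != positive and t == positive)
    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall / TPR
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # TNR
        "precision": tp / (tp + fp) if tp + fp else 0.0,    # PPV
    }
```

For the four-class grasp-form problem described in the paper, these binary metrics would typically be computed per class (one-vs-rest) and then averaged.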