Nwakeze, Osita Miracle; Okeke, Ogochukwu C.; Mgbemfulike, Ike Joseph
This study presents an intelligent robotic object-grasping system that combines computer vision and deep reinforcement learning to enhance robotic manipulation. The proposed approach employs You Only Look Once (YOLOv3) for real-time object recognition and localisation, while a Soft Actor-Critic (SAC) agent uses depth-image information to determine optimal gripping points. By transforming each gripping point into a three-dimensional grasping pose, the robotic manipulator can efficiently pick and place objects. The COCO dataset was used to improve YOLO's detection accuracy, and transfer learning accelerated training. Performance evaluation of the proposed system revealed a mean Average Precision (mAP) of 91.2% for object detection and an 87.3% grasping success rate. Ten-fold cross-validation confirmed the model's robustness and generalisability, showing minimal variation in performance across test settings. Compared with traditional gripping approaches, the proposed strategy improved accuracy by 27% and execution efficiency by 35%. These findings demonstrate the YOLO-SAC framework's promise for practical robotic applications, providing a flexible and scalable approach to automated object handling across a range of settings.
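The abstract mentions transforming a 2D gripping point into a three-dimensional grasping pose. A minimal sketch of one common way to do this, assuming a pinhole camera model with hypothetical intrinsics (the paper's actual transformation is not specified in this abstract):

```python
import math

def pixel_to_grasp_pose(u, v, depth, fx, fy, cx, cy, yaw=0.0):
    """Back-project a 2D gripping point (u, v) and its depth reading
    into a 3D grasp position in the camera frame, plus a gripper yaw.

    Assumes a pinhole camera model; fx, fy are focal lengths and
    (cx, cy) is the principal point. All values here are illustrative,
    not taken from the paper.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z, yaw)

# Example with hypothetical camera intrinsics:
pose = pixel_to_grasp_pose(u=420, v=240, depth=0.8,
                           fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                           yaw=math.pi / 4)
```

A point at the principal point back-projects straight along the optical axis, which is a quick sanity check for any implementation of this kind.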