JOURNAL ARTICLE

Robotic Grasp Pose Detection Using Deep Learning

Abstract

Recent advances in deep learning have expanded the capabilities of robotic systems in visual perception, object manipulation, autonomous navigation, and human-robot collaboration. This paper proposes a transfer learning technique with deep convolutional neural networks to learn to visually identify grasping configurations for a parallel-plate gripper used to grasp various household objects. Red-Green-Blue-Depth (RGB-D) data from the Cornell Grasp Dataset are used to train the network model in an end-to-end fashion. With this method, we achieve a grasping-configuration prediction accuracy of 93.91%.
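The abstract does not include code, but the grasp representation commonly paired with the Cornell Grasp Dataset can be sketched as follows. A grasp configuration for a parallel-plate gripper is typically a 5-tuple (x, y, theta, w, h): the rectangle center, its orientation, the gripper opening, and the plate width. Note this parameterization and the angle-tolerance check are assumptions about the standard Cornell-style evaluation, not the authors' exact formulation:

```python
import math

def grasp_to_corners(x, y, theta, w, h):
    """Convert a 5-D grasp configuration (center x, y; angle theta;
    gripper opening w; plate width h) to the four corners of the
    oriented grasp rectangle, in order around the rectangle."""
    c, s = math.cos(theta), math.sin(theta)
    # Half-extents: along the gripper-closing direction (w) and along the plates (h).
    dxx, dxy = c * w / 2.0, s * w / 2.0
    dyx, dyy = -s * h / 2.0, c * h / 2.0
    return [(x + dxx + dyx, y + dxy + dyy),
            (x + dxx - dyx, y + dxy - dyy),
            (x - dxx - dyx, y - dxy - dyy),
            (x - dxx + dyx, y - dxy + dyy)]

def angle_match(theta_pred, theta_true, tol_deg=30.0):
    """Check whether a predicted grasp angle falls within a tolerance
    of a ground-truth angle; grasp angles are equivalent modulo 180 degrees."""
    d = abs(theta_pred - theta_true) % math.pi
    return min(d, math.pi - d) <= math.radians(tol_deg)
```

For example, an axis-aligned grasp at the origin with a 2-unit opening and 1-unit plate width yields a 2x1 rectangle centered at (0, 0); the 30-degree tolerance is one half of the rectangle-metric criterion often used to score predicted grasps against ground-truth rectangles.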

Keywords:
Grasping; artificial intelligence; deep learning; convolutional neural networks; computer vision; RGB-D; robotics; transfer learning; grippers; object detection; perception; human-robot interaction; pattern recognition; engineering

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 0.38
References: 31
Citation normalized percentile: 0.63

Topics

Robot Manipulation and Learning
Physical Sciences →  Engineering →  Control and Systems Engineering
Soft Robotics and Applications
Physical Sciences →  Engineering →  Biomedical Engineering
Hand Gesture Recognition Systems
Physical Sciences →  Computer Science →  Human-Computer Interaction