Shehan Caldera, Alexander Rassau, Douglas Chai
Recent advancements in Deep Learning have accelerated the capabilities of robotic systems in terms of visual perception, object manipulation, automated navigation, and human-robot collaboration. This paper proposes the use of a transfer learning technique with deep convolutional neural networks to learn how to visually identify the grasping configurations for a parallel plate gripper that will be used to grasp various household objects. The Red-Green-Blue-Depth (RGB-D) data from the Cornell Grasp Dataset is used to train the network model using an end-to-end learning method. With this method, we achieve a grasping configuration prediction accuracy of 93.91%.