JOURNAL ARTICLE

Active Grasp Synthesis for Grasping Unknown Objects

Brandon Call

Year: 2015, Journal: Research Repository (Delft University of Technology), Publisher: Delft University of Technology

Abstract

Manipulation is a key capability for robots designed to work in everyday environments such as homes, offices and streets. These robots often do not have manipulators specialized for specific tasks, but grippers that can grasp the target object. This makes grasping a crucial ability that enables many manipulation tasks. Robotic grasping is a complex process with various aspects: the design of the gripper; detecting grasping points or regions that lead to a stable grasp (grasp synthesis); avoiding surrounding objects while executing the grasp (obstacle avoidance); detecting task-related features of the object; and altering the pose of the object to free up graspable regions (pre-grasp manipulation). For a robust grasping system, all these aspects should work in harmony, aid each other and preferably cover each other's mistakes.

Among these aspects, vision-based grasp synthesis for unknown objects forms a large portion of the robotic grasping literature. These algorithms deal with the problem of detecting grasping points or regions on a target object without an object shape model supplied a priori; instead, they utilize visual information provided by the robot's sensors. The majority of these algorithms use a single image of the target object for grasp synthesis, and make implicit or explicit assumptions about the missing shape information of the target object. The missing information is a function of the shape of the object as well as the viewpoint of the vision sensor. So far, the literature offers no reliable grasp synthesis algorithm that can cope with the missing shape information and provide successful grasp synthesis for a large variety of objects and viewpoints. This thesis proposes a novel framework in which the grasp synthesis process is coupled with active vision strategies in order to relax the assumptions on the viewpoint of the vision sensor and increase the grasp success rate.
Unlike prior work, which considers grasp synthesis as a passive data analysis process that uses only the provided image of the target object, the proposed framework introduces strategies to improve the quality of the data by leading the sensor to viewpoints from which the grasp synthesis algorithms can generate higher-quality grasps. With such a strategy, the burden of the grasp synthesis algorithms is shared with an active vision stage, which boosts their success rates. Within the framework, two novel methodologies are presented, each of which utilizes a different active vision strategy.

In the first methodology, local viewpoint optimization methods are analyzed; an extremum seeking control based optimization method is utilized to optimize the viewpoint of the sensor locally by continuously maximizing the grasp quality value. This methodology is easy to implement as it does not necessitate any prior training, but it carries a risk of getting stuck at local optima. With this method, up to a 94% success rate has been achieved for power grasps. However, it is observed that noise on the grasp quality value and the inability to avoid local optima affect the performance negatively.

In the second methodology, supervised learning algorithms are used to obtain an exploration policy. This strategy has a lower risk of getting stuck at local optima, but requires a training process. Furthermore, with this strategy, the information acquired during the process can be fused, and the assumptions on the missing object shape data can be relaxed significantly. The experimental results show that the strategy is superior to heuristic-based and random search techniques in terms of both success rate and efficiency.

With the proposed framework, we hope to encourage a new way of thinking about the grasp synthesis problem by introducing the use of active vision tools. We believe such an approach can contribute significantly to solving this challenging robotics problem.
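The first methodology's idea of maximizing a grasp quality value by moving the sensor can be illustrated with a classic sinusoidal-perturbation extremum seeking loop. The sketch below is illustrative only and not the thesis's implementation: `grasp_quality` is a hypothetical stand-in for a vision-based quality metric over a single viewpoint angle, and all gains are arbitrary. The controller dithers the viewpoint, demodulates the measured quality to estimate the local gradient, and climbs it, without needing a model of the quality function.

```python
import math

def grasp_quality(theta):
    # Hypothetical stand-in for a vision-based grasp quality metric:
    # a smooth function of the viewpoint angle, maximal at theta = 1.2.
    return 1.0 - (theta - 1.2) ** 2

def extremum_seeking(theta0, steps=4000, dt=0.005, a=0.1, omega=20.0, k=2.0):
    """Sinusoidal-perturbation extremum seeking (gradient-free ascent).

    theta0 : initial viewpoint angle
    a, omega : dither amplitude and frequency
    k : adaptation gain
    """
    theta_hat = theta0
    for i in range(steps):
        t = i * dt
        dither = a * math.sin(omega * t)
        # In practice this measurement would come from the grasp
        # synthesis algorithm and would be noisy.
        quality = grasp_quality(theta_hat + dither)
        # Demodulation: on average, quality * sin(omega t) is
        # proportional to the local gradient of the quality function.
        grad_est = quality * math.sin(omega * t)
        theta_hat += k * grad_est * dt
    return theta_hat
```

Because the loop only ever uses local measurements, it converges to the nearest maximum; starting far from a better optimum leaves it stuck there, which is exactly the local-optima limitation the abstract notes for this methodology.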

Keywords:
Grasping; Grippers; Artificial intelligence; Computer science; Computer vision; Robot; Human–computer interaction; Engineering

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.32
References: 0
Citation Normalized Percentile: 0.66


Topics

Robot Manipulation and Learning
Physical Sciences →  Engineering →  Control and Systems Engineering
Robotics and Sensor-Based Localization
Physical Sciences →  Engineering →  Aerospace Engineering
Robotic Path Planning Algorithms
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

BOOK-CHAPTER

GRASPING UNKNOWN OBJECTS

Frank Ade, Martin Rutishauser, M. Trobina

Series in Machine Perception and Artificial Intelligence, Year: 1995, Pages: 445-459
JOURNAL ARTICLE

Towards cognitive grasping: modeling of unknown objects and its corresponding grasp types

Hyoungnyoun Kim, Inkyu Han, Bum-Jae You, Ji‐Hyung Park

Journal: Intelligent Service Robotics, Year: 2011, Vol: 4 (3), Pages: 159-166
JOURNAL ARTICLE

Grasping of unknown objects via curvature maximization using active vision

Berk Çallı, Martijn Wisse, Pieter Jonker

Journal: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, Year: 2011, Pages: 995-1001