Abstract

We propose a novel pipeline for unknown object grasping in shared robotic autonomy scenarios. State-of-the-art methods for fully autonomous scenarios are typically learning-based approaches, optimised for a specific end-effector, that generate grasp poses directly from sensor input. In the domain of assistive robotics, we seek instead to utilise the user's cognitive abilities for enhanced satisfaction, grasping performance, and alignment with their high-level task-specific goals. Given a pair of stereo images, we perform unknown object instance segmentation and generate a 3D reconstruction of the object of interest. In shared control, the user then guides the robot end-effector across a virtual hemisphere centred around the object to their desired approach direction. A physics-based grasp planner finds the most stable local grasp on the reconstruction, and finally the user is guided by shared control to this grasp. In experiments on the DLR EDAN platform, we report a grasp success rate of 87% for 10 unknown objects, and demonstrate the method's capability to grasp objects in structured clutter and from shelves.
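The virtual hemisphere described above can be made concrete with a small geometric sketch: the user's input selects an azimuth and elevation, which map to an end-effector position on a hemisphere around the object and an approach direction pointing at the object centre. This is a minimal illustration only; the function name, parameterisation, and radius are assumptions, not details taken from the paper.

```python
import numpy as np

def hemisphere_approach(center, radius, azimuth, elevation):
    """Illustrative sketch: map user-chosen angles to a pose on a
    virtual hemisphere around the object (assumed parameterisation).

    azimuth in [0, 2*pi); elevation in [0, pi/2], where 0 is a
    horizontal approach and pi/2 approaches straight down from above.
    Returns the end-effector position and a unit approach direction
    pointing toward the object centre.
    """
    # Point on the upper hemisphere relative to the object centre.
    offset = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    position = np.asarray(center, dtype=float) + offset
    approach = -offset / np.linalg.norm(offset)  # toward the object
    return position, approach
```

For example, an elevation of pi/2 places the end-effector directly above the object with the approach direction pointing straight down, regardless of azimuth.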

Keywords:
Artificial intelligence, Robotics, Computer science, Computer vision, Medical robotics, Human–computer interaction

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 1.27
References: 31
Citation Normalized Percentile: 0.72

Topics

Robot Manipulation and Learning
Physical Sciences →  Engineering →  Control and Systems Engineering
Soft Robotics and Applications
Physical Sciences →  Engineering →  Biomedical Engineering
Teleoperation and Haptic Systems
Physical Sciences →  Engineering →  Mechanical Engineering