Enabling Micro Aerial Vehicles (MAVs) with semi-autonomous capabilities to assist their teleoperation is crucial in several applications. Remote human operators generally lack both the situational awareness to perceive obstacles near the drone and the readiness to issue commands quickly enough to avoid collisions. In this work, we devise a novel teleoperation setting that asks the operator to provide a simple high-level signal encoding the speed and direction they expect the drone to follow. We then endow the MAV with an end-to-end Deep Reinforcement Learning (DRL) model that computes control commands to track the desired trajectory while performing collision avoidance. Unlike state-of-the-art (SotA) works, our approach allows the robot to move freely in 3D space, requires only the current RGB image captured by a monocular camera and the current robot position, and makes no assumptions about obstacle shape and size. We show the effectiveness and generalization capabilities of our strategy by comparing it against a SotA baseline in photorealistic simulated environments.
Raffaele Brilli, Marco Legittimo, Francesco Crocetti, Mirko Leomanni, Mario Luca Fravolini, Gabriele Costante