Cameras are popular sensors for robot navigation tasks such as localization, as they are inexpensive, lightweight, and provide rich data. However, fast movements of a mobile robot typically degrade the performance of vision-based localization systems due to motion blur. In this paper, we present a reinforcement learning approach to choosing appropriate velocity profiles for vision-based navigation. The learned policy minimizes the time to reach the destination and implicitly takes the impact of motion blur on observations into account. To reduce the size of the resulting policies, which is desirable for memory-constrained systems, we compress the learned policy via a clustering approach. Extensive simulated and real-world experiments demonstrate that our learned policy significantly outperforms any policy that uses a constant velocity. We furthermore show that our policy is applicable to different environments. Additional experiments demonstrate that our compressed policies incur no performance loss compared to the originally learned policy.
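The policy compression mentioned above can be illustrated with a minimal sketch: states of a tabular policy that share the same greedy velocity choice are merged into one cluster, so only one representative action per cluster must be stored. The toy Q-table, the state discretization, and all names below are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of compressing a tabular velocity policy by clustering
# states with identical greedy actions. The Q-table and discretization are
# invented for illustration only.
from collections import defaultdict

# Candidate forward velocities in m/s (assumed discretization).
VELOCITIES = [0.2, 0.4, 0.8]

# Toy Q-table: state = (distance-to-goal bucket, texture bucket),
# value = Q-values for each candidate velocity.
q_table = {
    (0, 0): [1.0, 1.5, 0.9],   # near goal, low texture -> moderate speed
    (0, 1): [1.0, 1.6, 1.1],
    (1, 0): [0.8, 1.2, 1.4],   # far from goal -> high speed pays off
    (1, 1): [0.7, 1.1, 1.5],
}

def greedy_action(q_values):
    """Index of the velocity with the highest Q-value."""
    return max(range(len(q_values)), key=lambda a: q_values[a])

def compress(q_table):
    """Merge states whose greedy velocity agrees into one cluster each,
    keeping a single representative velocity per cluster."""
    clusters = defaultdict(list)
    for state, q_values in q_table.items():
        clusters[greedy_action(q_values)].append(state)
    state_to_cluster = {s: a for a, members in clusters.items() for s in members}
    cluster_to_velocity = {a: VELOCITIES[a] for a in clusters}
    return state_to_cluster, cluster_to_velocity

state_to_cluster, cluster_to_velocity = compress(q_table)
compressed_policy = {s: cluster_to_velocity[c] for s, c in state_to_cluster.items()}
```

Here four states collapse to two clusters, and by construction the compressed policy selects exactly the same velocities as the original table, mirroring the abstract's claim that compression incurs no performance loss.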
Armin Hornung, Maren Bennewitz, Hauke Strasdat