This paper proposes a mapless visual navigation method for autonomous mobile robots. Conventional robot navigation addresses two problems: self-localization and path planning, and solutions to both are generally based on environment maps. However, such maps require significant effort to create, update, and expand, so it is impractical to build a map covering every area the robot may travel. This paper presents a novel mapless visual navigation framework with two modes, i.e., global and local navigation. The framework navigates to a target location given only the name and an image of each landmark. The global navigation module determines the path by detecting landmarks from their names and images, while the local navigation module computes the robot's relative position and orientation with respect to the target image. A deep Convolutional Neural Network (CNN) is applied in both modules. Since Deep Learning (DL) methods have an inherent ability to generalize, the proposed method is expected to navigate in previously unseen environments after training on a large amount of data. Evaluations of both navigation modules on an actual robot demonstrate the feasibility of the proposed mapless visual navigation approach.
Walead Kaled Sleaman, Sırma Yavuz
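The two-mode framework described in the abstract can be sketched as a simple dispatch loop: global mode consumes a sequence of landmarks via an object detector, then hands off to local mode, which servos on the relative pose estimated from the target image. This is a minimal illustrative sketch, not the paper's implementation; all class names, method names, thresholds, and the hand-off logic are assumptions, and the CNN-based detector and pose regressor are left as placeholders.

```python
# Hypothetical sketch of the two-mode mapless navigation loop.
# Names and thresholds are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Landmark:
    name: str    # landmark name supplied by the user
    image: bytes # reference image of the landmark


class MaplessNavigator:
    """Global mode follows a landmark sequence; local mode fine-positions
    the robot using the relative pose to the final target image."""

    def __init__(self, landmarks: List[Landmark], target_image: bytes):
        self.landmarks = landmarks
        self.target_image = target_image
        self.mode = "global"

    def detect_landmark(self, frame: bytes, landmark: Landmark) -> bool:
        # Placeholder for the CNN object detector (global module).
        raise NotImplementedError

    def relative_pose(self, frame: bytes) -> Tuple[float, float, float]:
        # Placeholder for the CNN pose estimator (local module):
        # returns (dx, dy, dtheta) relative to the target image.
        raise NotImplementedError

    def step(self, frame: bytes) -> str:
        if self.mode == "global":
            if self.landmarks and self.detect_landmark(frame, self.landmarks[0]):
                self.landmarks.pop(0)   # current landmark reached
            if not self.landmarks:
                self.mode = "local"     # all landmarks passed: fine positioning
            return "follow_path"
        dx, dy, dtheta = self.relative_pose(frame)
        # Assumed arrival tolerances (5 cm position, 0.1 rad heading).
        if abs(dx) < 0.05 and abs(dy) < 0.05 and abs(dtheta) < 0.1:
            return "arrived"
        return "servo_to_target"
```

In a real system the placeholders would wrap the trained CNNs, and `step` would be called once per camera frame, with the returned action driving the motion controller.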