An essential capability for mobile robots is autonomous navigation in natural or human environments. This requires that a robot be able to determine its own position within its environment (self-localization), particularly with respect to the location of features relevant to fulfilling its defined task (localization of the destination), and to find the paths necessary to reach that destination (path planning). In this context, we present a new strategy for mobile robots to determine their position and orientation with respect to visual landmarks. In our case, the robot's position is not estimated with high accuracy at once; instead, the estimate is improved repeatedly by analysing the landmarks from different positions, exploiting the robot's motion. We use only the visual information provided by a single camera, without resorting to a model of the environment, the robot, or even the visual system. After an initial location estimate, the robot tracks its position with an unscented Kalman filter (UKF), which does not require derivatives of the nonlinear system or measurement functions. As experiments show, the accuracy of the chosen strategy is sufficient to move to a defined goal without the need for high computational power.
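The appeal of the UKF noted above is that it propagates a set of sigma points directly through the nonlinear motion and measurement models, so no Jacobians are needed. The following is a minimal sketch of one predict/update cycle, not the paper's implementation: the unicycle motion model, the landmark positions, and the range-only measurement model are illustrative assumptions.

```python
import numpy as np

# Assumed landmark positions for illustration only.
LANDMARKS = np.array([[5.0, 0.0], [0.0, 5.0]])

def f(x, u, dt):
    # Assumed unicycle motion model; state x = (px, py, theta), input u = (v, w).
    v, w = u
    return np.array([x[0] + v * np.cos(x[2]) * dt,
                     x[1] + v * np.sin(x[2]) * dt,
                     x[2] + w * dt])

def h(x):
    # Assumed measurement model: ranges to the two known landmarks.
    d = LANDMARKS - x[:2]
    return np.hypot(d[:, 0], d[:, 1])

def ukf_step(x, P, u, z, dt, Q, R, alpha=1.0, kappa=0.0, beta=2.0):
    """One UKF predict/update cycle; no derivatives of f or h are needed."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n

    def points(m, C):
        # 2n+1 sigma points from the Cholesky factor of the scaled covariance.
        S = np.linalg.cholesky((n + lam) * C)
        return np.vstack([m, m + S.T, m - S.T])

    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1.0 - alpha**2 + beta

    # Predict: push sigma points through the motion model.
    X = np.array([f(p, u, dt) for p in points(x, P)])
    xp = Wm @ X
    Pp = Q + sum(w * np.outer(d, d) for w, d in zip(Wc, X - xp))
    Pp = 0.5 * (Pp + Pp.T)  # keep the covariance symmetric

    # Update: push predicted sigma points through the measurement model.
    Xp = points(xp, Pp)
    Z = np.array([h(p) for p in Xp])
    zp = Wm @ Z
    S = R + sum(w * np.outer(d, d) for w, d in zip(Wc, Z - zp))
    C = sum(w * np.outer(a, b) for w, a, b in zip(Wc, Xp - xp, Z - zp))
    K = C @ np.linalg.inv(S)
    xn = xp + K @ (z - zp)
    Pn = Pp - K @ S @ K.T
    return xn, 0.5 * (Pn + Pn.T)
```

Repeated cycles from different positions shrink the covariance, matching the idea of refining an initially coarse estimate by exploiting the robot's motion.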