This work investigates the application of Visual-Inertial SLAM (Simultaneous Localization and Mapping) to the navigation of an autonomous robot. As autonomous navigation technologies develop, there is a growing need for effective methods that allow robots to operate in complex environments. Visual-Inertial SLAM fuses data from cameras and inertial measurement units (IMUs), providing high accuracy in both localization and map construction. Over the decades, several SLAM approaches based on the particle filter, the extended Kalman filter, and neural networks have been investigated. These approaches construct maps in a variety of representations, including landmark, surface, polygon-grid, and occupancy-grid representations. They are now widely used for applications such as robot motion planning in unknown environments, and are deployed in self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. This work aims to simultaneously localize and map a robot in an unknown indoor environment using IMU odometry data and feature points detected by a stereo camera. To achieve this goal, an approach based on the extended Kalman filter (EKF) is implemented. The Kalman filter is an optimal linear estimator for linear system models with additive independent white noise in both the process and measurement models. The extended Kalman filter adapts it to nonlinear models by linearizing them with multivariate Taylor-series expansions. The proposed solutions can be useful for improving autonomous systems such as delivery robots, drones, and mobile platforms, as well as for developing new methods of sensor-data fusion in modern navigation technologies.
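The predict/update cycle described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the unicycle motion model, the single range-bearing landmark observation, and all noise values are illustrative assumptions. It shows the two steps the abstract names: propagating the nonlinear motion model and linearizing both models with first-order (Taylor-series) Jacobians inside the standard Kalman update.

```python
import numpy as np

# Hypothetical minimal EKF step for a 2D robot pose (x, y, theta) driven by
# odometry input u = (v, w), corrected by a range-bearing observation of one
# known landmark. Noise matrices Q, R are illustrative, not from the paper.

def f(x, u, dt):
    """Nonlinear motion model: unicycle moved by speed v and turn rate w."""
    px, py, th = x
    v, w = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + w * dt])

def F_jac(x, u, dt):
    """Jacobian of f w.r.t. the state (first-order Taylor expansion of f)."""
    th = x[2]
    v = u[0]
    return np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                     [0.0, 1.0,  v * np.cos(th) * dt],
                     [0.0, 0.0,  1.0]])

def h(x, lm):
    """Observation model: range and bearing to a landmark lm = (lx, ly)."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]])

def H_jac(x, lm):
    """Jacobian of h w.r.t. the state, evaluated at the predicted pose."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])

def ekf_step(x, P, u, z, lm, Q, R, dt):
    # Predict: push the mean through f, the covariance through the Jacobian.
    x_pred = f(x, u, dt)
    F = F_jac(x, u, dt)
    P_pred = F @ P @ F.T + Q
    # Update: standard Kalman gain computed with the linearized observation.
    H = H_jac(x_pred, lm)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    y = z - h(x_pred, lm)
    y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))  # wrap bearing residual
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

In a full EKF-SLAM system the state vector would also carry the landmark positions, so the Jacobians grow with the map; the sketch keeps the landmark known to stay short.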
Magdy Awad, Moamen Mohamed, Naguib Saleh