This paper proposes an efficient training method, based on deep reinforcement learning, for a robot to navigate environments with static and dynamic obstacles and reach its goal autonomously. Previous methods have focused on specific scenarios, such as crowded environments; in practice, however, the variety of scenarios combining static and dynamic obstacles tends to make real robotic systems fail or operate under restrictions. In this work, training is performed over six scenarios per episode, whereas traditional methods consider only one. Several neural networks are trained and compared on the following metrics: success rate, collision rate, uncomfortable rate, travel time, and average travel distance. In addition, we conducted Gazebo simulations using ROS and experimental tests across four scenarios to demonstrate that our approach outperforms previous studies. The results show a greatly enhanced ability of the robot to act in various situations, as shown in the following video: https://youtu.be/mvkMFZjlaQo