CONFERENCE PAPER

Mapless navigation based on continuous deep reinforcement learning

Xing Chen, Lumei Su, Houde Dai

Year: 2021   Venue: 2021 China Automation Congress (CAC)   Vol: 39   Pages: 6758-6763

Abstract

This paper proposes a mapless navigation scheme based on continuous deep reinforcement learning to address the problem that robots cannot flexibly avoid obstacles and navigate in dynamic environments. The reinforcement learning algorithm used in this paper is proximal policy optimization (PPO), and the baseline is the discrete deep reinforcement learning algorithm Deep Q-Network (DQN). Experiments in the Gazebo simulation environment show that both the training efficiency and the success rate of the PPO algorithm are much higher than those of the DQN algorithm. The policy model trained in simulation is then transferred directly to a physical robot, and the experimental results verify that the robot navigates and avoids obstacles well without further training: the measured single-target navigation success rate is 80%, and the multi-target navigation success rate is 70%.
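The PPO algorithm named in the abstract optimizes a clipped surrogate objective over the ratio between the new and old policies. A minimal sketch of that clipped loss in plain Python (the sample values and the clip range `eps=0.2` are illustrative assumptions, not settings reported in the paper):

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017).

    Returns the negative of the clipped objective, averaged over
    samples, so a minimizer can be applied to it directly.
    """
    total = 0.0
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)                       # pi_new(a|s) / pi_old(a|s)
        clipped = max(1.0 - eps, min(ratio, 1.0 + eps)) # ratio clipped to [1-eps, 1+eps]
        total += min(ratio * adv, clipped * adv)        # pessimistic (lower) bound
    return -total / len(advantages)

# Sanity check: with identical old and new policies the ratio is 1,
# so the loss reduces to the negative mean advantage.
adv = [1.0, -0.5, 2.0]
logp = [math.log(0.3), math.log(0.5), math.log(0.2)]
loss = ppo_clip_loss(logp, logp, adv)  # -> -mean(adv)
```

The clipping is what distinguishes PPO from a plain policy-gradient update: when the ratio drifts outside `[1-eps, 1+eps]`, the objective stops rewarding further movement in that direction, which keeps each update close to the old policy.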

Keywords:
Reinforcement learning, Deep reinforcement learning, Mapless navigation, Obstacle avoidance, Mobile robot, Artificial intelligence

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.06
Refs: 18
Citation Normalized Percentile: 0.42


Topics

Robotic Path Planning Algorithms
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Reinforcement Learning in Robotics
Physical Sciences → Computer Science → Artificial Intelligence
Autonomous Vehicle Technology and Safety
Physical Sciences → Engineering → Automotive Engineering