In this paper, we propose mapless navigation for a robot using deep reinforcement learning with continuous actions, in order to investigate the effect of continuous actions on mapless robot navigation. Assuming that the robot's position can be easily obtained from an indoor localization system, the agent is trained in a simulation environment to learn a mapless navigation policy from only the obstacle distances and the relative position of the target. Considering the motion constraints of robots in the real world, the agent chooses from a properly limited range of continuous actions, outputting a steering angle and a moving distance within those limits. Through experiments comparing against traditional discrete actions, we validate that continuous actions give the agent richer exploration and more flexible movement, and thus a higher probability of reaching the navigation target.
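As a minimal sketch of the limited continuous action space described above: the policy outputs a steering angle and a moving distance, which are clipped to permitted ranges before being applied to the robot pose. The specific limits and the pose-update model below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical limits for illustration; the paper's exact ranges are not given here.
MAX_STEER = math.radians(30.0)  # max steering angle per step (rad)
MAX_DIST = 0.5                  # max moving distance per step (m)

def clip_action(raw_steer, raw_dist):
    """Constrain the policy's continuous output to the permitted action range."""
    steer = max(-MAX_STEER, min(MAX_STEER, raw_steer))
    dist = max(0.0, min(MAX_DIST, raw_dist))
    return steer, dist

def step_pose(x, y, theta, steer, dist):
    """Apply one action: turn by the steering angle, then move forward."""
    theta += steer
    return x + dist * math.cos(theta), y + dist * math.sin(theta), theta
```

Because the action components are real-valued rather than drawn from a small discrete set, consecutive steps can trace arbitrarily fine-grained paths, which is the source of the richer exploration compared in the experiments.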