Abstract

This paper presents a study on mapless navigation of an autonomous mobile robot using deep reinforcement learning in an intralogistics setting. The task is for the robot to learn to navigate to a goal without prior knowledge of the environment. A controller based on the Soft Actor-Critic algorithm is designed, trained, and applied to navigate a robot equipped with a $360^{\circ}$ LiDAR and a front-facing camera. The controller is successfully validated through extensive simulations in an almost fully observable environment. We further investigate the performance and possible limitations of the proposed controller in a partially observable environment, using a 3D Temporal Convolution Network to process the time series of images from the visual observations. Besides partial observability, we also address the problem of sparse positive rewards when training the deep reinforcement learning algorithm, using a combined approach of Automatic Curriculum Learning and Dual Prioritized Experience Replay.
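To make the replay mechanism mentioned above concrete, here is a minimal proportional prioritized experience replay buffer in plain Python. This is a simplified sketch of standard prioritized replay, not the paper's Dual Prioritized Experience Replay; the class name and the `capacity` and `alpha` parameters are illustrative assumptions, not from the source.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay (simplified sketch).

    Transitions with larger TD error are sampled more often, which
    helps learning when positive rewards are sparse.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha       # 0 = uniform sampling, 1 = fully prioritized
        self.buffer = []         # stored transitions
        self.priorities = []     # one priority per transition
        self.pos = 0             # ring-buffer write index

    def add(self, transition, td_error):
        # Small epsilon keeps zero-error transitions sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Overwrite the oldest transition once the buffer is full.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        total = sum(self.priorities)
        weights = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return [self.buffer[i] for i in idx]
```

In a full implementation one would also apply importance-sampling weights to correct the sampling bias and update priorities after each gradient step; those details are omitted here for brevity.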

Keywords:
Reinforcement learning, computer science, observability, mobile robot, artificial intelligence, controller, robot, task, computer vision, engineering

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.26
References: 22
Citation Normalized Percentile: 0.58

Topics

Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
© 2026 ScienceGate Book Chapters — All rights reserved.