JOURNAL ARTICLE

Visual Navigation for Obstacle Avoidance Using Deep Reinforcement Learning with Policy Optimization

Abstract

Mobile robots navigating indoor environments must detect and avoid obstacles to prevent collisions. In recent years, memory-based deep reinforcement learning approaches have gained popularity because they allow a robot to navigate more safely by drawing on a long sequence of observations acquired over time. Because complex indoor environments are only partially observable, obtaining sufficient information from their properties remains an ongoing challenge. To retain relevant information about the structure of the environment over time, an LSTM is incorporated into the network architecture, yielding a memory-based deep reinforcement learning method. In this method, LiDAR data are fused with grayscale images captured by a monocular camera to obtain the information needed to identify obstacles, and the resulting features are used to learn an effective obstacle avoidance policy. The assessment results show that the approach achieves strong performance in terms of average accumulated reward and success rate.
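The fusion-plus-memory pipeline the abstract describes can be sketched in a toy form: concatenate LiDAR readings with image features, unroll a single LSTM cell over a short observation sequence, and map the final hidden state to action scores. This is an illustrative sketch only, not the authors' implementation; all dimensions, the random weights, the hand-rolled LSTM cell, and the three-action policy head are assumptions made for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    # Plain matrix-vector product over nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; W, U, b stack the input/forget/cell/output gates."""
    H = len(h)
    z = [wx + uh + bb for wx, uh, bb in zip(matvec(W, x), matvec(U, h), b)]
    i = [sigmoid(v) for v in z[0:H]]          # input gate
    f = [sigmoid(v) for v in z[H:2*H]]        # forget gate
    g = [math.tanh(v) for v in z[2*H:3*H]]    # candidate cell state
    o = [sigmoid(v) for v in z[3*H:4*H]]      # output gate
    c_new = [fv*cv + iv*gv for fv, cv, iv, gv in zip(f, c, i, g)]
    h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
    return h_new, c_new

random.seed(0)
lidar = [random.random() for _ in range(8)]   # simulated LiDAR range readings
img   = [random.random() for _ in range(8)]   # grayscale-image features (assumed CNN output)
x = lidar + img                               # early fusion by concatenation

H = 4                                         # hidden size (assumption)
W = [[random.gauss(0, 0.1) for _ in range(len(x))] for _ in range(4*H)]
U = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(4*H)]
b = [0.0] * (4*H)

h, c = [0.0]*H, [0.0]*H
for _ in range(5):                            # unroll over a short observation sequence
    h, c = lstm_step(x, h, c, W, U, b)

# The final hidden state feeds a policy head scoring discrete avoidance
# actions, e.g. [turn left, go forward, turn right] (assumed action set).
policy_W = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(3)]
action_logits = matvec(policy_W, h)
print(len(action_logits))
```

In a real agent the image features would come from a convolutional encoder and the weights would be trained with a policy-optimization objective rather than sampled at random; the sketch only shows how fused observations and an LSTM state can produce action scores.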

Keywords:
Computer science; Reinforcement learning; Obstacle avoidance; Artificial intelligence; Observability; Mobile robot; Robot; Motion planning; Computer vision; Obstacle; Deep learning; Machine learning

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 20
Citation Normalized Percentile: 0.20

Topics

Robotic Path Planning Algorithms (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)
Autonomous Vehicle Technology and Safety (Physical Sciences → Engineering → Automotive Engineering)
© 2026 ScienceGate Book Chapters — All rights reserved.