JOURNAL ARTICLE

Visual Navigation with Actor-Critic Deep Reinforcement Learning

Abstract

Visual navigation in complex environments is crucial for intelligent agents. In this paper, we propose an efficient deep reinforcement learning (DRL) method for visual navigation tasks: the synchronous advantage actor-critic (A2C) algorithm combined with the generalized advantage estimator (GAE). A2C lets agents learn from multiple parallel processes, which significantly reduces training time, while the GAE, used to estimate the advantage function, improves the quality of the policy gradient estimates. We focus on visual navigation in ViZDoom and train agents in two health-gathering scenarios. The experimental results show that this method successfully teaches our agents to navigate in these scenarios. The A2C with GAE agent reaches the highest score in the first task and a competitive score in the second task. In addition, this agent attains higher average scores with lower variance in both tasks.
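The abstract does not include implementation details, but the GAE component it describes follows a standard recursion. Below is a minimal sketch, assuming per-step rewards, value estimates (with one bootstrap value appended), episode-termination flags, and the usual discount and trace-decay parameters gamma and lambda; the function name and signature are illustrative, not taken from the paper.

```python
# Minimal sketch of Generalized Advantage Estimation (GAE).
# values must have len(rewards) + 1 entries: V(s_0), ..., V(s_T),
# where the final entry is the bootstrap value for the last state.
def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # Zero out the bootstrap term at episode boundaries.
        mask = 0.0 if dones[t] else 1.0
        # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * mask - values[t]
        # GAE recursion: A_t = delta_t + gamma * lambda * A_{t+1}
        gae = delta + gamma * lam * mask * gae
        advantages[t] = gae
    return advantages
```

In an A2C setup these advantages would weight the policy-gradient loss, with the rollouts collected synchronously across the parallel worker processes the abstract mentions.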

Keywords:
Reinforcement learning, Deep learning, Visual navigation, Actor-critic, Artificial intelligence, Machine learning, Computer science

Metrics

Cited by: 13
FWCI (Field-Weighted Citation Impact): 0.79
References: 32
Citation Normalized Percentile: 0.77

Topics

Reinforcement Learning in Robotics
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Artificial Intelligence in Games
Physical Sciences →  Computer Science →  Artificial Intelligence