Visual navigation in complex environments is crucial for intelligent agents. In this paper, we propose an efficient deep reinforcement learning (DRL) method for visual navigation tasks: the synchronous advantage actor-critic (A2C) algorithm combined with the generalized advantage estimator (GAE). A2C lets agents learn from multiple parallel processes, which significantly reduces training time, while GAE improves the policy gradient estimates of the advantage function. We focus on visual navigation tasks in ViZDoom and train agents in two health-gathering scenarios. The experimental results show that this method successfully teaches our agents to navigate in these scenarios: the A2C with GAE agent reaches the highest score in the first task and a competitive score in the second. In addition, this agent attains better average scores and lower variance in both tasks.
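As background, the generalized advantage estimator named in the abstract computes, for each step t of a rollout, the discounted sum of TD residuals: A_t = Σ_l (γλ)^l δ_{t+l} with δ_t = r_t + γV(s_{t+1}) − V(s_t). The sketch below is illustrative only (the function name, defaults, and the omission of episode-boundary masking are our assumptions, not details from the paper):

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over a single rollout (a sketch).

    rewards:    r_0 .. r_{T-1} collected by the actor
    values:     critic estimates V(s_0) .. V(s_{T-1})
    last_value: bootstrap estimate V(s_T) for the state after the rollout
    Note: episode-termination masking is omitted for brevity.
    """
    T = len(rewards)
    adv = np.zeros(T)
    next_value = last_value
    running = 0.0
    # Accumulate discounted TD residuals backwards through the rollout.
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * next_value - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
        next_value = values[t]
    return adv
```

With γ = λ = 1 this reduces to the reward-to-go minus the value baseline; smaller λ trades variance for bias in the policy gradient estimate.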