To navigate indoor environments without colliding, mobile robots must be able to detect and avoid obstacles. In recent years, memory-based Deep Reinforcement Learning approaches have gained popularity because they let robots navigate more safely by exploiting long sequences of information acquired over time. Because complex indoor environments are only partially observable, obtaining sufficient information about them remains an ongoing challenge. To retain relevant information about the structure of the environment over time, an LSTM is incorporated into the network architecture to implement a memory-based Deep Reinforcement Learning method. In this method, LiDAR data are fused with grayscale images captured by a monocular camera to obtain the information needed to identify obstacles, and the resulting features are used to learn an effective obstacle-avoidance policy. In the evaluation, the approach achieved strong performance in terms of average accumulated reward and success rate.
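The pipeline described above (sensor fusion followed by a recurrent memory whose hidden state feeds the policy) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the concatenation-based fusion, and the hand-rolled LSTM cell with random weights are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the abstract does not specify them.
LIDAR_DIM = 36      # downsampled LiDAR range readings
IMG_FEAT_DIM = 64   # features extracted from the grayscale image
HIDDEN = 32         # LSTM hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                 # pre-activations for all four gates
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_new = f * c + i * g                 # update the cell memory
    h_new = o * np.tanh(c_new)            # new hidden state
    return h_new, c_new

# Randomly initialised parameters stand in for trained weights.
IN_DIM = LIDAR_DIM + IMG_FEAT_DIM
W = rng.normal(scale=0.1, size=(4 * HIDDEN, IN_DIM))
U = rng.normal(scale=0.1, size=(4 * HIDDEN, HIDDEN))
b = np.zeros(4 * HIDDEN)

h = np.zeros(HIDDEN)
c = np.zeros(HIDDEN)

# Process a short sequence of fused observations over time, so the hidden
# state accumulates information about the environment's structure.
for t in range(5):
    lidar = rng.uniform(0.1, 5.0, LIDAR_DIM)       # simulated range scan
    img_feat = rng.normal(size=IMG_FEAT_DIM)       # simulated image features
    fused = np.concatenate([lidar, img_feat])      # early fusion by concatenation
    h, c = lstm_step(fused, h, c, W, U, b)

# h is the memory-bearing feature vector a policy head would consume.
print(h.shape)
```

In practice the image features would come from a convolutional encoder and the policy head would map `h` to action values or a distribution over velocities; the sketch only shows how fused observations are carried through time by the recurrent memory.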
Pooyan Rahmanzadeh Gervi, Ahad Harati, Sayed Kamaledin Ghiasi-Shirazi