CONFERENCE PAPER

Inference-based Hierarchical Reinforcement Learning for Cooperative Multi-agent Navigation

Lijun Xia, Chao Yu, Zifan Wu

Year: 2021 Venue: 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI) Pages: 57-64

Abstract

This work aims to address the multi-agent cooperative navigation problem (MCNP), where multiple agents work together to occupy the landmarks in an environment without collision and with minimum time consumption. To this end, we propose an inference-based hierarchical reinforcement learning (IHRL) model, in which the high-level component infers the target allocation scheme among the agents and landmarks using a local message-passing algorithm, while the low-level component trains the sub-policy corresponding to the target assigned by the high-level component using traditional RL algorithms. The highlight of our model lies in the interplay of high-level inference based on the knowledge from learning and low-level learning with the results from inference. In this way, the overall learning efficiency can be improved by integrating more indicative information into the agents' coordinated learning process. Extensive experiments demonstrate the effectiveness of the proposed model.
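The abstract describes a two-level loop: a high-level component that infers an agent-to-landmark assignment, and low-level sub-policies that drive each agent toward its assigned target. The minimal sketch below illustrates only that interplay; it is not the paper's method. In particular, a simple greedy nearest-landmark assignment stands in for the paper's local message-passing inference, and a fixed step toward the target stands in for a learned RL sub-policy. All function names here are hypothetical.

```python
import math

def assign_targets(agents, landmarks):
    """High-level stand-in: greedily give each agent a distinct landmark.
    (The paper instead infers this allocation with message passing.)"""
    remaining = set(range(len(landmarks)))
    assignment = {}
    for i, pos in enumerate(agents):
        j = min(remaining, key=lambda k: math.dist(pos, landmarks[k]))
        assignment[i] = j
        remaining.discard(j)
    return assignment

def sub_policy_step(pos, target, speed=0.1):
    """Low-level stand-in: move a fixed-size step toward the assigned
    target. (The paper trains this sub-policy with standard RL.)"""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target  # landmark occupied
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

# One high-level/low-level iteration on a toy 2-agent instance.
agents = [(0.0, 0.0), (3.0, 0.0)]
landmarks = [(0.0, 1.0), (3.0, 1.0)]
assignment = assign_targets(agents, landmarks)
agents = [sub_policy_step(agents[i], landmarks[assignment[i]])
          for i in range(len(agents))]
```

In the actual model the two levels also exchange information across training: the high-level inference uses knowledge acquired by learning, and the low-level learning is conditioned on the inferred assignment, which is the interplay the abstract highlights.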

Keywords:
Reinforcement learning, Computer science, Inference, Artificial intelligence, Machine learning

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.13
Refs: 50
Citation Normalized Percentile: 0.50


Topics

Robotic Path Planning Algorithms (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)
Distributed Control Multi-Agent Systems (Physical Sciences → Computer Science → Computer Networks and Communications)