To address the issues of slow convergence and poor interpretability, this paper proposes a novel hierarchical reinforcement learning framework consisting of an upper-level macro-decision model and a lower-level micro-execution model. To enable agents to explore in an orderly manner, expert knowledge is incorporated into the framework to design explainable subtasks. Furthermore, a hierarchical multi-agent reinforcement learning algorithm with explainable subtasks is developed and evaluated in the SC2LE environment. Experimental results show that the proposed algorithm outperforms traditional MARL approaches in complex scenarios involving cooperation among heterogeneous agents, effectively addresses the challenge of interpreting multi-agent behavior, and significantly improves training convergence speed.
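The two-level decision loop described above can be sketched minimally. This is an illustrative assumption, not the paper's implementation: the subtask names, action sets, and placeholder random policies below are hypothetical stand-ins for the learned macro-decision and micro-execution models.

```python
import random

# Hypothetical explainable subtasks an upper-level (macro) policy might choose from.
SUBTASKS = ["scout", "attack", "retreat"]
# Hypothetical primitive actions available to the lower-level (micro) policy per subtask.
ACTIONS = {
    "scout": ["move_north", "move_east"],
    "attack": ["fire", "advance"],
    "retreat": ["move_south", "move_west"],
}

def upper_policy(state):
    """Macro-decision model: select an explainable subtask (random placeholder)."""
    return random.choice(SUBTASKS)

def lower_policy(state, subtask):
    """Micro-execution model: select a primitive action within the subtask."""
    return random.choice(ACTIONS[subtask])

def run_episode(steps=5, seed=0):
    """Run the hierarchical loop; the logged subtask labels make behavior interpretable."""
    random.seed(seed)
    trace = []
    state = 0  # placeholder environment state
    for _ in range(steps):
        subtask = upper_policy(state)          # upper level picks a subtask
        action = lower_policy(state, subtask)  # lower level executes within it
        trace.append((subtask, action))
        state += 1  # stand-in for an environment transition
    return trace
```

Because every primitive action in the trace is tagged with its parent subtask, an observer can read off *why* an agent acted, which is the interpretability benefit the framework claims.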
Yanbo Liu, Weiqi Sun, Wenchao Xu, Xin Xiong, Hao Li, Ling Qu