Tenghai Qiu, Shiguang Wu, Zhen Liu, Zhiqiang Pu, Jianqiang Yi, Yuqian Zhao, Biao Luo
Inspired by psychological insights into individual behavior, we propose a novel cognition-oriented multiagent reinforcement learning (CORL) framework. CORL equips agents with two distinct types of cognition, situational cognition and self-cognition, both derived from local observations. To enhance the informativeness and precision of these cognition types, we introduce two information-theoretic regularizers: one aligns situational cognition with the global state, and the other aligns self-cognition with each agent's identity to improve role differentiation and team coordination. In addition, the centralized training with decentralized execution framework is adopted to train the policy network. Our simulations demonstrate that CORL effectively harnesses local observations for enriched cooperation, yielding pronounced performance improvements, particularly on challenging tasks.
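The two regularizers can be illustrated with a minimal sketch. The abstract does not give the exact objectives, so the following is an assumption-laden toy version: situational cognition is pulled toward a global-state embedding via a cosine-alignment penalty, and self-cognition is pulled toward the agent's identity via the cross-entropy of a linear identity classifier; both terms are added to the policy loss with hypothetical weights `lam_sit` and `lam_self`. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def situational_alignment_loss(sit_cog, global_state):
    # Penalize misalignment between an agent's situational-cognition
    # embedding and a global-state embedding: 1 - cosine similarity
    # (0 when perfectly aligned, up to 2 when opposed).
    num = float(np.dot(sit_cog, global_state))
    den = float(np.linalg.norm(sit_cog) * np.linalg.norm(global_state) + 1e-8)
    return 1.0 - num / den

def self_identity_loss(self_cog, W_id, agent_id):
    # Cross-entropy of a (hypothetical) linear identity classifier on the
    # self-cognition embedding, pushing each agent's self-cognition toward
    # a representation predictive of its own identity.
    logits = W_id @ self_cog
    probs = softmax(logits)
    return -float(np.log(probs[agent_id] + 1e-8))

def corl_total_loss(policy_loss, sit_cog, global_state,
                    self_cog, W_id, agent_id,
                    lam_sit=0.1, lam_self=0.1):
    # Toy combined objective: policy loss plus the two regularizers.
    return (policy_loss
            + lam_sit * situational_alignment_loss(sit_cog, global_state)
            + lam_self * self_identity_loss(self_cog, W_id, agent_id))
```

In a full CTDE pipeline, `global_state` would only be available to the centralized critic during training; at execution time each agent relies solely on its local observation, which is exactly why the regularizers are applied during training.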