JOURNAL ARTICLE

Curiosity-driven Exploration for Cooperative Multi-Agent Reinforcement Learning

Abstract

In multi-agent reinforcement learning, exploration is more challenging than in the single-agent setting because of the large joint state-action space and the need for fine-grained cooperation among agents. We extend ICM (Intrinsic Curiosity Module), a curiosity-driven exploration method for single-agent environments, to the multi-agent setting and propose Multi-Agent Curiosity-Driven Exploration (MACDE). We define the intrinsic reward for a team of agents as the sum of the individual agents' curiosity, where each agent's curiosity is its prediction error on the next state given the other agents' actions. We evaluate MACDE on Predator-Prey and the StarCraft Multi-Agent Challenge. The results show that MACDE explores effectively and learns better policies in both environments.
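The intrinsic-reward definition in the abstract, team curiosity as the sum of per-agent next-state prediction errors conditioned on the other agents' actions, can be sketched as follows. This is a minimal illustration only: the class names, dimensions, and the linear stand-in for the learned forward model are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for the sketch.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 4

class ForwardModel:
    """Per-agent forward model: predicts the agent's next observation
    from its current observation and the joint action of all agents.
    A fixed linear map stands in for a trained network here."""

    def __init__(self, obs_dim, joint_act_dim):
        self.W = rng.normal(scale=0.1, size=(obs_dim, obs_dim + joint_act_dim))

    def predict(self, obs, joint_action):
        return self.W @ np.concatenate([obs, joint_action])

    def curiosity(self, obs, joint_action, next_obs):
        # The prediction error on the next state is the curiosity signal.
        err = next_obs - self.predict(obs, joint_action)
        return 0.5 * float(err @ err)

models = [ForwardModel(OBS_DIM, N_AGENTS * ACT_DIM) for _ in range(N_AGENTS)]

def team_intrinsic_reward(obs_list, actions, next_obs_list):
    """Team curiosity: sum of the individual agents' curiosity,
    each conditioned on the joint action (i.e. other agents' actions)."""
    joint_action = np.concatenate(actions)
    return sum(m.curiosity(o, joint_action, o2)
               for m, o, o2 in zip(models, obs_list, next_obs_list))

# One fictitious transition, just to exercise the reward.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [rng.normal(size=ACT_DIM) for _ in range(N_AGENTS)]
next_obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
r_int = team_intrinsic_reward(obs, acts, next_obs)
```

In practice the forward models would be trained to minimize the same prediction error that serves as the reward, so curiosity decays for transitions the team has already learned to predict.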

Keywords:
Curiosity, Reinforcement learning, Computer science, Action, State, State-action space, Artificial intelligence, Psychology, Social psychology

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.26
References: 45
Citation Normalized Percentile: 0.55

Topics

Reinforcement Learning in Robotics
Physical Sciences →  Computer Science →  Artificial Intelligence
Artificial Intelligence in Games
Physical Sciences →  Computer Science →  Artificial Intelligence
Evolutionary Game Theory and Cooperation
Social Sciences →  Social Sciences →  Sociology and Political Science