JOURNAL ARTICLE

Provably Efficient Multi-Agent Reinforcement Learning with Fully Decentralized Communication

Abstract

A challenge in reinforcement learning (RL) is minimizing the cost of sampling associated with exploration. Distributed exploration reduces sampling complexity in multi-agent RL (MARL). We investigate the performance benefits in MARL when exploration is fully decentralized. Specifically, we consider a class of online, episodic, tabular Q-learning problems under time-varying reward and transition dynamics, in which agents can communicate in a decentralized manner. We show that group performance, as measured by the bound on regret, can be significantly improved through communication when each agent uses a decentralized message-passing protocol, even when limited to sending information up to its γ-hop neighbors. We prove regret and sample complexity bounds that depend on the number of agents, the communication network structure, and γ. We show that incorporating more agents and more information sharing into the group learning scheme speeds up convergence to the optimal policy. Numerical simulations illustrate our results and validate our theoretical claims.
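The γ-hop communication constraint described above can be made concrete with a small sketch. The following is illustrative only and not the paper's protocol: it assumes a communication graph given as an adjacency dictionary, uses breadth-first search to find each agent's γ-hop neighborhood, and aggregates per-agent visit counts as a stand-in for whatever exploration statistics the actual message-passing scheme shares.

```python
from collections import deque

def gamma_hop_neighbors(adj, source, gamma):
    """Return all nodes within gamma hops of `source` in the
    communication graph `adj` (dict: node -> list of neighbors)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == gamma:          # do not expand beyond gamma hops
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v in dist if v != source}

def share_counts(adj, local_counts, gamma):
    """Each agent sums its own visit count with those of its gamma-hop
    neighbors (hypothetical aggregation rule, for illustration)."""
    return {
        a: local_counts[a]
           + sum(local_counts[b] for b in gamma_hop_neighbors(adj, a, gamma))
        for a in adj
    }
```

For example, on a path graph 0-1-2-3 with γ = 1, agent 1 aggregates counts from agents 0 and 2 only; raising γ widens each neighborhood and, per the abstract, tightens the regret bound at the cost of more communication.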

Keywords:
Reinforcement learning, Regret, Computer science, Convergence (economics), Distributed computing, Class (philosophy), Multi-agent system, Mathematical optimization, Artificial intelligence, Machine learning, Mathematics

Metrics

Cited by: 3
FWCI (Field-Weighted Citation Impact): 0.87
References: 35
Citation Normalized Percentile: 0.63

Topics

Game Theory and Applications (Social Sciences → Decision Sciences → Management Science and Operations Research)
Advanced Bandit Algorithms Research (Social Sciences → Decision Sciences → Management Science and Operations Research)
Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)