JOURNAL ARTICLE

Aggregation Transfer Learning for Multi-Agent Reinforcement Learning

Abstract

Multi-agent reinforcement learning is currently applied mainly to real-time strategy games, such as StarCraft, and to unmanned aerial vehicle (UAV) combat, and multi-agent reinforcement learning algorithms have attracted widespread attention. In large-scale multi-agent environments, however, the problem of state-space explosion remains. It is especially acute in transfer training: because the network input size is fixed, existing network structures are difficult to adapt to large-scale scenario transfer training. In this paper, we use aggregation transfer training for multi-agent combat problems in aerial UAV combat scenarios to extend small-scale learning to large-scale and complex scenarios. We combine a graph neural network (GNN) with the MADDPG algorithm, processing each agent's observation with an aggregation function and taking the result as the network input. Training starts from a small-scale multi-UAV combat scenario and gradually increases the number of UAVs. The experimental results indicate that MADDPG methods for multi-agent UAV combat problems trained via aggregation transfer learning reach the target performance more quickly and provide superior performance compared with those trained without aggregation transfer learning. The versatility of the confrontation model is also improved.
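The key idea in the abstract is that an aggregation function maps a variable number of agent observations to a fixed-size network input, so the same policy network can transfer from small to large scenarios. A minimal sketch of this idea, assuming mean-pooling as the aggregator (the paper's actual GNN aggregation function, observation dimensions, and function names are not specified here and are illustrative assumptions):

```python
import numpy as np

def aggregate_observations(own_obs, neighbor_obs):
    # Mean-pool the variable-length list of neighbor observations into one
    # fixed-size vector, then concatenate it with the agent's own observation.
    # The resulting input size is independent of the number of agents.
    if len(neighbor_obs) == 0:
        pooled = np.zeros_like(own_obs)  # no neighbors observed
    else:
        pooled = np.mean(np.stack(neighbor_obs), axis=0)
    return np.concatenate([own_obs, pooled])

# The network input has the same size whether 3 or 50 UAVs are observed:
own = np.ones(4)
x_small = aggregate_observations(own, [np.random.rand(4) for _ in range(3)])
x_large = aggregate_observations(own, [np.random.rand(4) for _ in range(50)])
print(x_small.shape, x_large.shape)  # both print (8,)
```

Because mean-pooling is permutation-invariant, the input is also unaffected by the order in which neighboring UAVs are listed, which matches the abstract's goal of reusing one network across scenario scales.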

Keywords:
Reinforcement learning, Computer science, Transfer of learning, Artificial intelligence, Scale (ratio), Process (computing), Artificial neural network, Graph, Machine learning, Theoretical computer science

Metrics

Cited By: 9
FWCI (Field Weighted Citation Impact): 0.99
Refs: 39
Citation Normalized Percentile: 0.81
Is in top 1%
Is in top 10%

Topics

Reinforcement Learning in Robotics
Physical Sciences →  Computer Science →  Artificial Intelligence
Guidance and Control Systems
Physical Sciences →  Engineering →  Aerospace Engineering