JOURNAL ARTICLE

Cooperative Multi-Agent Deep Reinforcement Learning for Adaptive Decentralized Emergency Voltage Control

Abstract

Under-voltage load shedding (UVLS) for power grid emergency control forms the last line of defense against cascading outages and blackouts under contingencies. This letter proposes a novel cooperative multi-agent deep reinforcement learning (MADRL)-based UVLS algorithm that operates in an adaptive, decentralized manner. With well-designed input signals reflecting voltage deviation, newly structured neural networks are developed as intelligent agents that output control actions and their probabilities, accommodating the high uncertainty of volatile power system operations. Moreover, the interaction among agents for coordinated control is implemented and refined by a state-of-the-art attention mechanism, which helps agents selectively learn effective interaction information. The proposed method realizes decentralized coordinated control that adapts to extremely high uncertainty. Case studies on an IEEE benchmark system demonstrate the superior performance of the proposed algorithm.
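The abstract does not give implementation details of the attention mechanism used for agent interaction. As a rough sketch under common MADRL conventions, inter-agent communication can be modeled as scaled dot-product attention over the agents' local state embeddings, so each agent weighs the other agents' information by learned relevance. All names, shapes, and the random projection matrices below are illustrative assumptions, not taken from the letter:

```python
import numpy as np

def agent_attention(embeddings, d_k=None):
    """Scaled dot-product attention over per-agent state embeddings.

    Each row of `embeddings` is one agent's local (e.g. voltage-derived)
    feature vector; the returned messages are relevance-weighted mixes
    of all agents' projected features.
    """
    n, d = embeddings.shape
    d_k = d_k or d
    rng = np.random.default_rng(0)
    # Hypothetical "learned" query/key/value projections (random here).
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = embeddings @ W_q, embeddings @ W_k, embeddings @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                  # (n, n) pairwise relevance
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ V, weights                      # messages, attention map

# Example: 4 agents, each with an 8-dim local embedding.
emb = np.random.default_rng(1).standard_normal((4, 8))
messages, attn = agent_attention(emb)
print(messages.shape, attn.shape)  # (4, 8) (4, 4)
```

Each agent's message vector would then be concatenated with its own observation before the policy head outputs load-shedding actions and their probabilities; the attention map itself shows which neighbors each agent is attending to.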

Keywords:
Reinforcement learning, Computer science, Multi-agent system, Adaptive control, Decentralized system, Artificial intelligence, Control (management)

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.37
References: 17
Citation Normalized Percentile: 0.49

Topics

Optimal Power Flow Distribution
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Power System Optimization and Stability
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Smart Grid Security and Resilience
Physical Sciences →  Engineering →  Control and Systems Engineering