JOURNAL ARTICLE

Distributed Optimal Energy Dispatch for Networked Microgrids with Federated Reinforcement Learning

Abstract

We investigate an optimal distributed energy dispatch strategy for networked microgrids (MGs), accounting for the uncertainties of distributed energy resources, the impact of energy storage, and privacy requirements. The energy dispatch problem is formulated as a Partially Observable Markov Decision Process (POMDP) and solved with the Deep Deterministic Policy Gradient (DDPG) method. To reduce the communication load and protect privacy, a federated reinforcement learning (FRL) framework is proposed, in which each MG trains model parameters on its own local data and transmits only the model weights to a global server. Each MG thereby obtains a global model that generalizes well across various cases. The proposed method is communication-efficient, privacy-preserving, and scalable. Numerical simulations on real-world datasets demonstrate the effectiveness of the proposed FRL method.
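The abstract's FRL workflow, in which each MG updates a model locally and the server aggregates only the transmitted weights, can be sketched as a FedAvg-style loop. This is a minimal illustration under assumed names and a simple averaging rule; the paper's actual DDPG networks and aggregation details are not specified here.

```python
import numpy as np

def local_update(weights, grads, lr=0.01):
    """One hypothetical local gradient step on an MG's own data."""
    return [w - lr * g for w, g in zip(weights, grads)]

def federated_average(client_weights):
    """Server-side aggregation: average each layer across all MGs.

    Only the weight lists cross the network; raw local data never does.
    """
    return [np.mean(np.stack(layer_stack), axis=0)
            for layer_stack in zip(*client_weights)]

# Three MGs start from the same (toy, two-layer) global model.
global_w = [np.zeros((2, 2)), np.zeros(2)]
clients = []
for seed in range(3):
    rng = np.random.default_rng(seed)
    # Stand-in for gradients computed from each MG's private data.
    grads = [rng.normal(size=w.shape) for w in global_w]
    clients.append(local_update(global_w, grads))

# One communication round: the server averages the received weights.
global_w = federated_average(clients)
```

In a full FRL setup this round would repeat, with the averaged model broadcast back to the MGs before the next local training phase.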

Keywords:
Reinforcement learning, Computer science, Markov decision process, Scalability, Markov process, Distributed computing, Process (computing), Energy (signal processing), Mathematical optimization, Partially observable Markov decision process, Artificial intelligence, Mathematics

Metrics

Cited by: 3
FWCI (Field-Weighted Citation Impact): 0.75
References: 10
Citation Normalized Percentile: 0.68

Topics

Microgrid Control and Optimization (Physical Sciences → Engineering → Control and Systems Engineering)
Electric Vehicles and Infrastructure (Physical Sciences → Engineering → Electrical and Electronic Engineering)
Smart Grid Energy Management (Physical Sciences → Engineering → Electrical and Electronic Engineering)