DISSERTATION

Reinforcement Learning For Adaptive Distribution Network Reconfiguration

Abstract

The increasing demand for electricity driven by the widespread adoption of electric vehicles necessitates effective distribution network reconfiguration methods. However, existing reconfiguration approaches often rely on precise network parameters and face scalability and optimality challenges. To overcome these issues, this thesis proposes a data-driven reinforcement learning-based algorithm for distribution network reconfiguration; the work is divided into three parts. In the first part, five reinforcement learning algorithms (deep Q-learning, dueling deep Q-learning, deep Q-learning with prioritized experience replay, soft actor-critic, and proximal policy optimization) are compared on the distribution network reconfiguration problem in 33- and 136-node test systems. Additionally, a new deep Q-learning-based action sampling method is introduced to reduce the size of the action space and improve system loss reduction. In the second part, a more general action-space sampling method is developed: the graph theory-based Yamada-Kataoka-Watanabe algorithm enumerates all minimum spanning trees of the network, modeled as an undirected graph, and power flow analysis is then performed on each spanning tree to rank the configurations from lowest to highest system loss. Unlike the earlier deep Q-learning-based approach, this sampling method is more versatile and can be applied seamlessly to any test system; it is evaluated on the 33-, 119-, and 136-node test systems. Comparative analysis against conventional methods demonstrates the effectiveness, scalability, and efficiency of the proposed method in reducing system losses and managing electricity demand.
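The second-part pipeline (enumerate spanning trees of the network graph, evaluate each, rank by loss) can be illustrated with a minimal sketch. The thesis uses the Yamada-Kataoka-Watanabe algorithm and a real power flow solver; here, as a simplification, spanning trees of a toy graph are found by exhaustive search over edge subsets, and a per-branch weight stands in for the power-flow loss of each radial configuration. All graph data and the loss surrogate are illustrative, not taken from the thesis.

```python
from itertools import combinations

def spanning_trees(n_nodes, edges):
    """Enumerate all spanning trees of a small undirected graph.

    Brute-force stand-in for the Yamada-Kataoka-Watanabe algorithm: any
    acyclic subset of n-1 edges on n nodes is a spanning tree. Only viable
    for toy-sized networks. `edges` is a list of (u, v, weight) tuples.
    """
    trees = []
    for subset in combinations(range(len(edges)), n_nodes - 1):
        # Union-find: n-1 edges that never close a loop form a spanning tree.
        parent = list(range(n_nodes))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        acyclic = True
        for idx in subset:
            u, v, _ = edges[idx]
            ru, rv = find(u), find(v)
            if ru == rv:  # this edge would close a loop
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            trees.append(subset)
    return trees

def rank_by_loss(edges, trees):
    """Rank candidate radial configurations from lowest to highest loss.

    A real implementation runs power flow per configuration; here the edge
    weight is treated as a per-branch loss surrogate (e.g. resistance).
    """
    return sorted(trees, key=lambda t: sum(edges[i][2] for i in t))
```

For example, a 4-node ring has four spanning trees (open any one switch); ranking them by the loss surrogate picks the configuration that opens the highest-loss branch, mirroring how the thesis orders configurations before handing them to the agent.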
While reinforcement learning methods offer fast decision-making, the lack of transparency in their decision processes hinders their application in critical scenarios. In particular, distribution network reconfiguration alters switch states, which can significantly shorten switch lifespan and therefore requires careful consideration. To address this transparency issue, the third part of this study introduces a novel approach that employs an explainer neural network to analyze and interpret reinforcement learning-based reconfiguration decisions. The explainer network is trained on the reinforcement learning agent's decisions, taking the active and reactive power of the buses as inputs and producing line states as outputs. Attribution methods are then applied to the explainer network to uncover the relationship between inputs and outputs, offering valuable insight into the agent's decision-making process. Overall, this thesis presents a comprehensive and innovative approach to distribution network reconfiguration, combining data-driven reinforcement learning for decision making, graph theory-based action sampling for improving the optimality of decisions, and an explainer neural network for decision interpretation.
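The attribution step described above can be sketched in miniature. The thesis applies attribution methods to a trained explainer network; the stand-in below uses a hand-specified linear scoring function in place of that network and estimates each input's influence by finite differences, which is the same local-sensitivity idea that gradient-based attribution (e.g. saliency maps) computes analytically. All coefficients and input values are illustrative assumptions.

```python
def line_state_score(p, q):
    """Toy stand-in for the explainer network: maps bus active (p) and
    reactive (q) power readings to a scalar score for one line's state.
    Coefficients are illustrative, not taken from the thesis."""
    return 0.8 * p[0] - 0.1 * p[1] + 0.3 * q[0]

def finite_difference_attribution(f, p, q, eps=1e-4):
    """Attribute f's output to each input by local sensitivity (df/dx),
    approximated with a small forward perturbation per input."""
    base = f(p, q)
    attrs = {}
    for name, vec in (("p", p), ("q", q)):
        for i in range(len(vec)):
            bumped = list(vec)
            bumped[i] += eps
            val = f(bumped, q) if name == "p" else f(p, bumped)
            attrs[f"{name}{i}"] = (val - base) / eps
    return attrs
```

Running this on sample readings recovers the coefficients as attributions (and assigns near-zero influence to the unused reactive input), which is the kind of input-output relationship the explainer analysis surfaces for the agent's switching decisions.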

Keywords:
Reinforcement learning; Control reconfiguration; Scalability; Graph; Sampling (signal processing); Tree (set theory); Minimum spanning tree
