Implementing energy management (EM) programs in modern power systems is pivotal for optimizing energy consumption, enhancing grid stability, and reducing operational costs. However, the success of these programs is often undermined by the rebound effect, in which the electricity load returns to its previous high levels after an initial reduction, eroding the intended energy savings. To address this challenge, this research proposes a novel approach to network reconfiguration (NRC) using deep reinforcement learning (DRL) with a Dueling Deep Q-Network (DDQN) algorithm. Our methodology leverages the DDQN to dynamically adapt the network configuration to the changing electricity load patterns induced by EM programs. The NRC process is formulated as a DRL problem in which the agent learns switching decisions that minimize operational cost. Case study results on the IEEE 33-bus system demonstrate the effectiveness of the proposed DDQN-based NRC approach in mitigating the detrimental impacts of the rebound effect.
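To make the dueling architecture concrete, the sketch below shows the core dueling aggregation, Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), applied to a reconfiguration-style discrete action set. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the state dimension, layer sizes, and the interpretation of actions as candidate switch configurations are all assumptions.

```python
import numpy as np

# Hypothetical dueling Q-network forward pass (illustrative only).
# In the paper's setting, the state would encode network/load conditions
# and each action a candidate switch configuration -- assumed here.

rng = np.random.default_rng(0)

N_ACTIONS = 5   # assumed number of candidate switch configurations
STATE_DIM = 8   # assumed number of state features (e.g., bus loads)

# Shared feature layer, then separate value and advantage streams.
W_feat = rng.normal(size=(STATE_DIM, 16))
W_val = rng.normal(size=(16, 1))
W_adv = rng.normal(size=(16, N_ACTIONS))

def q_values(state: np.ndarray) -> np.ndarray:
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    h = np.tanh(state @ W_feat)               # shared features
    v = h @ W_val                             # state value V(s), shape (1,)
    a = h @ W_adv                             # advantages A(s,a), shape (N_ACTIONS,)
    # Subtracting the mean advantage makes V and A identifiable.
    return (v + a - a.mean(axis=-1, keepdims=True)).ravel()

state = rng.normal(size=STATE_DIM)
q = q_values(state)
greedy_action = int(np.argmax(q))             # greedy reconfiguration choice
```

Centering the advantages means the mean of Q over actions equals V(s), which stabilizes training by separating "how good is this network state" from "how much better is one switch configuration than another".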
Mukesh Gautam, Mohammed Benidris