This paper presents a novel approach to economic dispatch (ED) optimization in power systems through the application of Proximal Policy Optimization (PPO), an advanced reinforcement learning algorithm. The economic dispatch problem, a fundamental challenge in power system operations, involves optimizing the generation output of multiple units to minimize operational costs while satisfying load demands and technical constraints. Traditional methods often struggle with the non-linear, non-convex nature of modern ED problems, especially with increasing penetration of renewable energy sources. Our PPO-based methodology transforms the ED problem into a reinforcement learning framework where an agent learns optimal generator scheduling policies through continuous interaction with a simulated power system environment. The proposed approach is validated on a 15-generator test system with varying load demands and operational constraints. Experimental results demonstrate that the PPO algorithm achieves superior performance compared to conventional techniques, with cost reductions of up to 7.3% and enhanced convergence stability. The algorithm successfully handles complex constraints including generator limits, ramp rates, and spinning reserve requirements, while maintaining power balance with negligible error margins. Furthermore, the computational efficiency of the PPO approach allows for real-time adjustments to rapidly changing system conditions, making it particularly suitable for modern power grids with high renewable energy penetration.
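The abstract describes casting economic dispatch as a reinforcement learning problem: the agent observes system state, adjusts generator outputs, and receives a reward reflecting operational cost and constraint violations. A minimal sketch of such an environment is shown below, assuming quadratic cost curves and a simple penalty for power imbalance; all class names, coefficients, and penalty weights are illustrative assumptions, not the paper's actual formulation.

```python
class EconomicDispatchEnv:
    """Toy ED environment sketch (hypothetical, not the paper's code).

    Each generator is a dict with quadratic cost coefficients
    c_i(p) = a + b*p + c*p^2, capacity limits, and a ramp limit.
    """

    def __init__(self, gens, demand):
        self.gens = gens
        self.demand = demand
        # start every unit at its minimum stable output
        self.p = [g["pmin"] for g in gens]

    def cost(self):
        # total generation cost under the assumed quadratic curves
        return sum(g["a"] + g["b"] * p + g["c"] * p * p
                   for g, p in zip(self.gens, self.p))

    def step(self, action):
        # action: per-generator output adjustment (MW), clipped to
        # ramp-rate and capacity limits before being applied
        for i, (g, dp) in enumerate(zip(self.gens, action)):
            dp = max(-g["ramp"], min(g["ramp"], dp))
            self.p[i] = max(g["pmin"], min(g["pmax"], self.p[i] + dp))
        imbalance = abs(sum(self.p) - self.demand)
        # reward: negative cost minus an (illustrative) imbalance penalty;
        # PPO would then maximize this over episodes of dispatch decisions
        reward = -self.cost() - 100.0 * imbalance
        state = self.p + [self.demand]
        return state, reward
```

In a full PPO pipeline, this environment would be wrapped behind a standard interface (e.g. a Gymnasium `Env`) and the policy network would map the state vector to continuous adjustment actions; the sketch only illustrates how cost minimization and constraint handling enter through the reward.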
Cong Zhang, Junjie Hou, Xiaoxi Lv, Pei Zhang