Internet of Things (IoT) technology is increasingly used in the smart grid to enable real-time monitoring, control, and optimization of power systems. With the Internet serving as the processing centre of the IoT, physical devices such as sensors, meters, and controllers can exchange information and interact with one another. In an IoT system, edge computing provides a decentralized computing environment in which data are processed close to their source, reducing latency and network traffic. By integrating IoT edge computing, the smart grid can collect real-time data on grid operations and use this information to optimize its performance. However, the large volume of data generated by IoT devices in a smart grid can delay processing and decision-making. To overcome this challenge, delay-optimization techniques such as workload allocation are needed so that the smart grid can quickly and accurately process the data it receives from IoT devices and make decisions based on those data. We implement and evaluate the proposed workload allocation technique using deep reinforcement learning (DRL): a DRL agent learns to select the best action for the current state of the grid, optimizing delay without requiring an explicit model of the system. Simulation results show that the proposed edge-computing approach outperforms the cloud-computing baseline.
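To make the workload-allocation idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a reinforcement-learning agent choosing between edge and cloud processing to minimize delay. The environment model, delay constants (`NET_RTT`, `EDGE_RATE`, `CLOUD_RATE`), and the tabular one-step update are all simplifying assumptions standing in for the paper's DRL formulation:

```python
import random


class OffloadEnv:
    """Toy delay model (assumed, not from the paper): a task of size
    0..2 arrives; the edge processes it locally, while offloading to
    the cloud adds a fixed network round-trip delay."""
    NET_RTT = 4.0     # assumed cloud round-trip network delay
    EDGE_RATE = 1.0   # edge: 1 size-unit processed per unit of delay
    CLOUD_RATE = 4.0  # cloud: faster compute per size-unit

    def reset(self):
        # State = size of the arriving task.
        self.size = random.randint(0, 2)
        return self.size

    def step(self, action):
        # action 0 = process at the edge, action 1 = offload to cloud.
        if action == 0:
            delay = (self.size + 1) / self.EDGE_RATE
        else:
            delay = self.NET_RTT + (self.size + 1) / self.CLOUD_RATE
        return -delay  # reward: negative delay (the DRL objective)


def train(episodes=5000, eps=0.1, alpha=0.1, seed=0):
    """Epsilon-greedy tabular value learning, a stand-in for the
    deep network used in the paper."""
    random.seed(seed)
    env = OffloadEnv()
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}
    for _ in range(episodes):
        s = env.reset()
        if random.random() < eps:
            a = random.randint(0, 1)            # explore
        else:
            a = max((0, 1), key=lambda x: q[(s, x)])  # exploit
        r = env.step(a)
        q[(s, a)] += alpha * (r - q[(s, a)])    # one-step update
    return q


if __name__ == "__main__":
    q = train()
    for s in range(3):
        best = max((0, 1), key=lambda a: q[(s, a)])
        print(f"task size {s}: best action = {'edge' if best == 0 else 'cloud'}")
```

Under these assumed delay constants the learned policy keeps all tasks at the edge, which is consistent with the abstract's claim that edge computing outperforms the cloud baseline; with a larger `CLOUD_RATE` or smaller `NET_RTT`, the same agent would learn to offload large tasks instead.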
Meng Li, F. Richard Yu, Pengbo Si, Wenjun Wu, Yanhua Zhang
Yingying Chi, Yi Zhang, Yong Liu, Hailong Zhu, Zhe Zheng, Rui Liu, Peiying Zhang