S. Aiswarya, Angelina Geetha, Kritika Ramesh
The number of devices connected to the Internet continues to rise with the growth of the Internet of Things (IoT). The IoT, and the expanding volume of data it communicates, strains cloud-based data processing and storage. Both cloud and fog computing let users host applications and data, but fog is more broadly distributed geographically and sits closer to the end user. Managing rapidly changing resource provisioning and allocation in fog computing creates new challenges for developing IoT applications and satisfying user requests. To control resource consumption and meet Service Level Agreements (SLAs), flexible and largely autonomous systems must select the appropriate virtual resources. This work presents a Deep Reinforcement Learning (DRL) based framework for resource provisioning that improves resource-management efficiency in IoT ecosystems. A Deep Neural Network (DNN) approximates the value function, enabling better adaptation to diverse conditions, learning from prior decisions, and acting as a self-learning adaptive system. Using the Proximal Policy Optimization (PPO) DRL algorithm, IoT services can be provisioned while reducing average energy consumption and latency, cutting costs, and allocating resources judiciously. Simulations in iFogSim show that the PPO policy increases resource utilization, reduces delay, and maintains acceptable service quality while lowering energy consumption under varying load rates.
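As a point of reference for the PPO algorithm named above, the sketch below shows PPO's clipped surrogate objective, the core update rule that distinguishes PPO from other policy-gradient DRL methods. This is a generic illustration, not the paper's implementation; the function names, the clipping constant, and the toy inputs are all illustrative assumptions.

```python
# Minimal sketch of PPO's clipped surrogate loss. In a fog-provisioning
# setting, each action would be a resource-allocation decision and the
# advantage would reflect observed latency/energy improvements; here the
# inputs are just plain numbers for illustration.
EPS_CLIP = 0.2  # illustrative clipping range, a common default

def ppo_clip_loss(old_probs, new_probs, advantages, eps=EPS_CLIP):
    """Average clipped surrogate loss (to be minimized by the optimizer).

    old_probs:  action probabilities under the policy that collected the data
    new_probs:  action probabilities under the current policy
    advantages: advantage estimates for those actions
    """
    total = 0.0
    for p_old, p_new, adv in zip(old_probs, new_probs, advantages):
        ratio = p_new / p_old                          # importance ratio
        unclipped = ratio * adv
        clipped = max(min(ratio, 1 + eps), 1 - eps) * adv
        total += min(unclipped, clipped)               # pessimistic bound
    return -total / len(advantages)                    # negate to maximize
```

The `min` of the clipped and unclipped terms keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough for the continually shifting loads described in the abstract.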