Muhammad Adi Sulistyo, Dedy Kurnia Setiawan
Edge computing has emerged as a pivotal technology to address the demands of low-latency and high-bandwidth applications by processing data closer to the source. However, the dynamic nature of edge environments, characterized by fluctuating workloads and constrained resources, poses significant challenges for efficient resource allocation. Traditional heuristic-based approaches often fail to adapt to real-time variations, while existing reinforcement learning (RL) models struggle with the high-dimensional state and action spaces inherent in edge scenarios. This study proposes a novel deep reinforcement learning (DRL)-based algorithm tailored for dynamic resource allocation in edge computing. Key innovations include the development of a hierarchical or multi-agent DRL model to enhance coordination among decentralized edge nodes, the integration of transfer learning techniques for rapid adaptation to new environments, and the design of lightweight architectures optimized for resource-constrained edge devices. Experimental results demonstrate that the proposed algorithm outperforms traditional methods and state-of-the-art RL models in terms of efficiency, adaptability, and scalability, thereby contributing to the advancement of intelligent edge computing.