Efficient resource allocation in Internet of Things (IoT) networks with integrated edge computing capabilities is critical for optimizing system performance, reducing latency, and managing the complex interplay among heterogeneous devices. This paper proposes a novel reinforcement learning-based framework for dynamic resource allocation in IoT edge computing networks. Leveraging deep reinforcement learning (DRL), the proposed approach models the intricate relationships among varying workloads, device heterogeneity, and fluctuating network conditions. The framework employs adaptive task offloading strategies and real-time decision-making to improve resource utilization, reduce energy consumption, and enhance Quality of Service (QoS). This interdisciplinary method integrates advances in artificial intelligence, distributed computing, and network optimization to address challenges across IoT-enabled domains such as smart cities, healthcare, and industrial automation. Experimental evaluations on benchmark IoT scenarios demonstrate that the DRL-based method significantly outperforms traditional optimization techniques in computational efficiency, scalability, and robustness. These findings underscore the potential of reinforcement learning to tackle complex resource allocation challenges in IoT edge computing networks, paving the way for smarter, more adaptive network management.
Xie Qianyu, Xutao Yang, Laixin Chi, Xuejie Zhang, Jixian Zhang
Yanhao Zhang, Nalam Venkata Abhishek, Mohan Gurusamy
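As an illustrative sketch only: the abstract does not specify the paper's DRL architecture, state/action spaces, or cost model, so the adaptive task-offloading decisions it describes are approximated below with a minimal tabular Q-learning agent that chooses, per task, between local execution and offloading to an edge server. The `simulate_cost` function, the two-bit state encoding, and all hyperparameters are assumptions made for this toy example, not the paper's method.

```python
import random

ACTIONS = (0, 1)  # 0 = execute task locally, 1 = offload to edge server

def simulate_cost(state, action):
    """Toy latency-plus-energy cost model (an assumption for illustration):
    offloading is cheap when the network is good; local execution is cheap
    when the device is lightly loaded."""
    load, net = state  # load: 0 = light, 1 = heavy; net: 0 = poor, 1 = good
    if action == 1:                      # offload to edge
        return 1.0 if net == 1 else 5.0
    return 1.5 if load == 0 else 4.0     # run locally

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over the 4 (load, net) states; the agent
    minimizes cost, so greedy selection takes the min-valued action."""
    rng = random.Random(seed)
    q = {(l, n): [0.0, 0.0] for l in (0, 1) for n in (0, 1)}
    for _ in range(episodes):
        state = (rng.randint(0, 1), rng.randint(0, 1))
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda x: q[state][x])
        cost = simulate_cost(state, a)
        next_state = (rng.randint(0, 1), rng.randint(0, 1))
        # Q-learning update on costs: bootstrap with min over next actions
        target = cost + gamma * min(q[next_state])
        q[state][a] += alpha * (target - q[state][a])
    return q

if __name__ == "__main__":
    q = train()
    for state, values in sorted(q.items()):
        best = "offload" if values[1] < values[0] else "local"
        print(state, [round(v, 2) for v in values], "->", best)
```

Under this toy cost model the learned policy offloads when the network is good and falls back to local execution otherwise, which mirrors the adaptive, condition-dependent offloading behavior the abstract attributes to the framework; a real system would replace the table with a deep network and the simulator with measured device and network state.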