The evolution of the Internet of Things (IoT) has driven a dramatic surge in interconnected devices and, with it, the generation of massive volumes of data. This data exchange strains traditional cloud computing models through latency, network congestion, and energy constraints. Device-to-device (D2D) communication offers an efficient alternative for direct data transfer, while Mobile Edge Computing (MEC) emerges as a promising solution for handling IoT-generated data by enabling computation at the network's edge, significantly reducing latency. In such systems, the task offloading decision, that is, whether to process an IoT task on the device or at the MEC server, becomes critical. To address this, our study introduces a novel approach to IoT task offloading using Double Deep Q-Learning Networks (DDQN). This strategy optimizes the offloading decision to balance system performance and resource utilization in D2D MEC environments. Simulation results show that the DDQN strategy reduces energy consumption and cost by approximately 8.9% and 10.8%, respectively, and improves latency by about 5.5% compared with baseline methods. These improvements illustrate the DDQN's potential for efficient IoT ecosystem management and its role in optimizing IoT task offloading in D2D MEC settings.
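To make the offloading decision concrete, the core of a DDQN agent can be sketched as follows. The defining feature of Double DQN is that the online network *selects* the next action while a separate target network *evaluates* it, reducing the value overestimation of plain DQN. The state features, the two-action offloading set (local vs. MEC server), and the linear Q-functions below are illustrative assumptions for a minimal sketch, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 2   # assumed action set: 0 = execute locally, 1 = offload to MEC server
STATE_DIM = 3   # assumed features: e.g. task size, channel quality, queue length

# Two parameter sets: the online network (action selection) and the
# target network (action evaluation), as in Double DQN.
W_online = rng.normal(size=(STATE_DIM, N_ACTIONS))
W_target = rng.normal(size=(STATE_DIM, N_ACTIONS))

def q_values(W, state):
    """Linear Q-function stand-in: one value per offloading action."""
    return state @ W

def ddqn_target(reward, next_state, gamma=0.9, done=False):
    """Double DQN target: online net picks the action, target net scores it."""
    if done:
        return reward
    best_action = int(np.argmax(q_values(W_online, next_state)))
    return reward + gamma * q_values(W_target, next_state)[best_action]

# One illustrative transition: the target value would then be used as the
# regression label when updating W_online; W_target is synced periodically.
state = np.array([0.5, 0.2, 0.8])
y = ddqn_target(reward=-1.0, next_state=state)
```

In a full agent the reward would combine the latency, energy, and cost terms the paper optimizes, and the linear Q-functions would be replaced by neural networks trained from a replay buffer.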
Paul Agbaje, Ebelechukwu Nwafor, Habeeb Olufowobi
S. Manish, Abhishek M N, H T Mallikarjun, Keerthan Kumar T. G.
Priyadarshni, Dhruvan Kadavala, Shivani Tripathi, Praveen Kumar, Rajiv Misra