Multi-access Edge Computing (MEC) is an emerging and promising computing paradigm that places computational resources closer to users at the network edge, effectively reducing both computation and communication latency. In practical Internet of Things (IoT) systems, many applications consist of interdependent subtasks, so deciding how to offload these tasks while respecting their dependencies in order to minimize latency is a challenging problem, particularly in dynamic environments with multiple users and multiple MEC servers. Most existing studies rely on heuristic approaches, which lack adaptability in dynamic MEC environments, while machine learning-based methods often overlook task dependencies. Unlike previous work, our research focuses on offloading dependent tasks in multi-user, multi-MEC-server scenarios. In this article, we first model the dependent task offloading problem as a Markov Decision Process (MDP). We then propose a deep reinforcement learning (DRL)-based framework, GDDTO, that aims to reduce task completion time. Specifically, the framework employs a Graph Convolutional Network (GCN) to extract task-dependency and dynamic MEC environment features, combined with a Double Deep Q-learning Network (DDQN) and an optimized experience replay mechanism to select and evaluate task offloading strategies. Finally, comparative experiments demonstrate that the method significantly reduces task completion latency across various scenarios, confirming its effectiveness.
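The abstract names two building blocks of GDDTO: GCN layers that propagate features over the task-dependency graph, and a Double DQN update in which the online network selects the next action while the target network evaluates it. The sketch below illustrates both mechanisms in their standard textbook form; it is not the authors' implementation, and the node features, dimensions, and numeric values are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One standard GCN propagation step:
    H' = ReLU( D^{-1/2} (A + I) D^{-1/2} · H · W ),
    where A is the (symmetric) task-dependency adjacency matrix,
    H holds per-task features, and W is a learnable weight matrix."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Standard Double DQN target:
    the online network picks a* = argmax_a Q_online(s', a),
    the target network evaluates it: y = r + γ · Q_target(s', a*)."""
    a_star = int(np.argmax(q_online_next))      # action selection: online net
    bootstrap = 0.0 if done else gamma * q_target_next[a_star]
    return reward + bootstrap                   # action evaluation: target net

# Illustrative 3-task chain DAG (treated as undirected for GCN propagation);
# per-task features are hypothetical [CPU cycles, data size] values.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [3.0, 0.2]])
W = np.ones((2, 4))                             # toy weights for the sketch
embeddings = gcn_layer(A, H, W)                 # shape (3, 4)

# Toy DDQN target for one transition (two offloading actions: local vs. edge).
y = ddqn_target(reward=1.0, gamma=0.9,
                q_online_next=np.array([0.2, 0.5]),
                q_target_next=np.array([1.0, 2.0]),
                done=False)
```

In the full framework these pieces would be trained jointly: the GCN embeddings feed the Q-network, and transitions are drawn from the (prioritized) replay buffer rather than computed one at a time as here.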
Bo Xie, Haixia Cui, Yejun He, Mohsen Guizani
Liqiong Chen, Xinyuan Yang, Huaiying Sun, Xudong Yu, Kaiwen Zhi
Ming Zhao, Qize Guo, Hao Yu, Tarik Taleb