Qingman Zhang, Yanzhu Gon, T. Y. Xing
Task offloading in edge computing is often studied while ignoring the dependencies between tasks, which degrades offloading performance. In this paper, the task offloading problem is modeled with deep reinforcement learning, and a Markov decision process is used to perform fine-grained offloading of computing tasks. The state and action spaces of the model are redefined, with energy consumption and delay serving as the reward. An attention mechanism assigns weights to the actions of the online network so that it can adaptively handle the various actions in the network. Finally, the algorithm is evaluated in simulation on DAGs with various topologies and compared with other standard algorithms. The proposed algorithm aims to improve the offloading efficiency of edge computing tasks. Compared with other standard algorithms, when the network transmission rate is 20 Mbps, the latency of the proposed algorithm is 2.3% lower than that of the best-performing comparison algorithm; when the data transmission rate is 8 Mbps and the number of task nodes is 45, the QoS of the proposed algorithm is 6.08% higher than that of the best comparison algorithm. Experimental results show that the proposed algorithm achieves good convergence, low latency and energy consumption, and good stability.
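The attention-weighted action selection described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, Q-values, and attention scores are hypothetical, standing in for the outputs of the online network and the attention mechanism.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax for converting scores to weights.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_weighted_action(q_values, attention_scores):
    """Scale the online network's per-action Q-values by attention
    weights, then pick the action with the highest weighted value."""
    weights = softmax(attention_scores)
    weighted_q = weights * q_values
    return int(np.argmax(weighted_q)), weighted_q

# Hypothetical values: one Q-value and one attention score per
# candidate offloading action (e.g. local, edge server 1, edge server 2).
q = np.array([1.0, 2.0, 0.5])
scores = np.array([0.2, 0.1, 0.7])
action, wq = attention_weighted_action(q, scores)
```

In a full agent, the reward driving these Q-values would combine energy consumption and delay (e.g. a weighted sum with negative sign), so the attention weights let the network emphasize actions that matter for the current DAG topology.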