Edge computing is a promising computing paradigm that brings processing and storage resources to the network edge, drastically reducing network traffic and service latency. Many edge computing applications consist of interdependent tasks, in which the output of one task serves as the input of another. Deciding how to offload these tasks to the network edge, that is, where to place each running task so as to optimize Quality of Service (QoS), is an important and difficult problem. In this work, we propose a novel Deep Reinforcement Learning based Task Offloading (DRLTO) method for intelligent task offloading, which represents the dependent tasks as a Directed Acyclic Graph (DAG) and applies off-policy reinforcement learning powered by a Sequence-to-Sequence (S2S) neural network. Experimental results show that DRLTO achieves lower cloud processing time, fewer single-terminal tasks, and a smaller percentage of failed tasks than Deep Reinforcement Learning based Cloud-Edge Collaborative Mobile Computation Offloading (DRL-CCMCO) and DeepEdge.
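The abstract states that dependent tasks are modeled as a DAG, so any offloading decision must respect the task precedence constraints. The following is a minimal, illustrative sketch (not the paper's implementation) of computing one valid scheduling order for such a task graph via Kahn's topological sort; the task numbering and edge list are hypothetical examples.

```python
from collections import defaultdict, deque

def topological_order(num_tasks, edges):
    """Return one valid execution order for interdependent tasks.

    edges: list of (u, v) pairs meaning task u's output feeds task v.
    Raises ValueError if the dependencies contain a cycle (i.e., not a DAG).
    """
    indegree = [0] * num_tasks
    successors = defaultdict(list)
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    # Tasks with no unmet dependencies are ready to run (or be offloaded).
    ready = deque(t for t in range(num_tasks) if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for v in successors[t]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    if len(order) != num_tasks:
        raise ValueError("dependency graph contains a cycle")
    return order

# Hypothetical 4-task application: task 0 feeds tasks 1 and 2,
# which both feed task 3.
print(topological_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
# → [0, 1, 2, 3]
```

In a DAG-based offloading scheme, an ordering like this fixes the sequence in which placement decisions (edge vs. cloud) are made for each task, which is what an S2S policy network can consume as an input sequence.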
Guanjin Qu, Huaming Wu, Ruidong Li, Pengfei Jiao