In the era of the Internet of Things (IoT), data offloading has become a promising and crucial strategy for improving overall system performance and providing quality of service (QoS). In this context, fog computing has recently gained considerable interest from both industry and academia. In this paper, we propose a delay-aware task offloading strategy for mobile fog-based networks. We consider several vehicles moving along a one-way road, some of which act as client vehicles while others act as mobile fog nodes. Each fog node allocates its available resources to the requesting client vehicles in its proximity. However, because of the dynamic nature of the vehicular environment, it is difficult to devise a scheme that decides whether computing tasks should be executed on the local on-board CPU or offloaded to neighbouring fog nodes. To this end, the paper proposes a deep reinforcement learning based intelligent task offloading for vehicles in motion (ITOVM) policy that minimizes the overall latency of the network while accounting for vehicle mobility and communication bandwidth constraints. The proposed ITOVM policy is formulated as a Markov decision process (MDP), which is solved using a deep Q-network (DQN). Finally, extensive simulation results demonstrate the efficacy and performance gains of the proposed approach compared to several baseline algorithms.
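The abstract formulates the local-vs-fog offloading decision as an MDP solved with a DQN, but gives no model details. The sketch below is purely illustrative: it uses tabular Q-learning as a simplified stand-in for the deep Q-network, on a toy MDP whose states (task-size level, fog-node distance level), latency model, and reward are assumptions of this sketch, not the paper's formulation.

```python
import random

# Toy offloading MDP (illustrative assumptions, not the paper's model):
# state = (task_size_level, fog_distance_level); actions: 0 = local CPU, 1 = offload.
SIZES, DISTS = 3, 3
ACTIONS = 2

def latency(size, dist, action):
    """Hypothetical latency model: local cost grows with task size;
    offloading trades a faster remote CPU for a distance-dependent link delay."""
    if action == 0:
        return 2.0 * (size + 1)              # local compute latency
    return 0.5 * (size + 1) + 1.0 * dist     # remote compute + transmission delay

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, d): [0.0, 0.0] for s in range(SIZES) for d in range(DISTS)}
    for _ in range(episodes):
        s = (rng.randrange(SIZES), rng.randrange(DISTS))
        for _ in range(10):  # a short episode of task arrivals
            # epsilon-greedy action selection
            a = rng.randrange(ACTIONS) if rng.random() < eps \
                else max(range(ACTIONS), key=lambda x: Q[s][x])
            r = -latency(s[0], s[1], a)      # reward = negative latency
            # vehicle mobility crudely modeled as a re-drawn state
            s2 = (rng.randrange(SIZES), rng.randrange(DISTS))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = {s: max(range(ACTIONS), key=lambda a: Q[s][a]) for s in Q}
```

Under these assumed costs, the learned policy offloads large tasks when the fog node is nearby and keeps small tasks local when the fog node is far, mirroring the delay-aware trade-off the abstract describes; the paper's DQN replaces the Q-table with a neural network so the policy generalizes over a continuous vehicular state space.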