Jiaqi Wu, Lin Huang, Huaize Liu, Lin Gao
Mobile edge computing (MEC) is a promising approach to reducing network traffic load and alleviating backhaul congestion by pushing computation down to the network edge (e.g., base stations) close to the origin of data. However, when many mobile devices (MDs) offload tasks to a base station (BS) in a dynamic and stochastic environment (e.g., with time-varying wireless channels and uncertain task models), it is often challenging for MDs to make offloading decisions in a decentralized manner. In this work, we consider a collaborative MEC scenario, where an MD can offload its task to its associated BS or, through that BS, to other BSs. In this scenario, we study the joint computation offloading and resource allocation problem, aiming to minimize the expected long-term delay subject to an energy consumption constraint. The problem is challenging due to the time-varying system and the distributed nature of the decisions. To solve it in an online and decentralized manner, we propose a deep reinforcement learning (DRL) based distributed online algorithm. By incorporating the double deep Q-network and dueling deep Q-network techniques, the proposed algorithm significantly improves the performance of the whole system. Simulation results show that the proposed DRL-based algorithm outperforms baseline methods and can reduce the average task delay by 76.4%-91.2%.
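The two DQN variants the abstract names can be illustrated concretely. Below is a minimal NumPy sketch of (a) the dueling aggregation, which splits the Q-function into a state-value stream V(s) and an advantage stream A(s,a), and (b) the double-DQN target, where the online network selects the next action and a separate target network evaluates it. All network sizes, weights, and the reward convention (negative delay) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, N_ACTIONS, GAMMA = 4, 16, 3, 0.95  # hypothetical sizes

def make_net():
    # A tiny randomly initialized two-stream (dueling) Q-network.
    return (rng.normal(size=(STATE_DIM, HIDDEN)),   # shared hidden layer
            rng.normal(size=(HIDDEN, 1)),           # value stream V(s)
            rng.normal(size=(HIDDEN, N_ACTIONS)))   # advantage stream A(s, a)

online, target = make_net(), make_net()

def dueling_q(net, state):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    W1, Wv, Wa = net
    h = np.tanh(state @ W1)
    v, a = h @ Wv, h @ Wa
    return (v + a - a.mean()).ravel()  # mean-subtraction for identifiability

# Double-DQN target: the online net picks the greedy action, the target net
# evaluates it; this reduces the over-estimation bias of vanilla DQN.
s_next = rng.normal(size=STATE_DIM)
reward = -1.0  # e.g., negative of the observed task delay
a_star = int(np.argmax(dueling_q(online, s_next)))
td_target = reward + GAMMA * dueling_q(target, s_next)[a_star]
```

The mean-subtraction makes the V/A decomposition unique (otherwise a constant could shift between the two streams), and the decoupled action selection/evaluation is what distinguishes double DQN from the standard max-over-target update.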
Jienan Chen, Siyu Chen, Qi Wang, Bin Cao, Gang Feng, Jianhao Hu
Muhammad Ejaz, Guowei Wu, Abid Sultan, Tahir Iqbal
Sangwon Hwang, Juseong Park, Hoon Lee, Mintae Kim, Inkyu Lee
Subrat Prasad Panda, Ansuman Banerjee, Arani Bhattacharya