Thinh Quang Dinh, Quang Duy Lã, Tony Q. S. Quek, Hyundong Shin
Mobile edge computing (MEC) is expected to provide cloud-like capabilities to mobile users (MUs) at the edge of wireless networks. However, deploying MEC systems faces many challenges, one of which is achieving an efficient distributed offloading mechanism for multiple users in time-varying wireless environments. In this paper, we study a multi-user, multi-edge-node computation offloading problem. Since the communication and computing capacities of edge nodes are limited, resource contention arises when many MUs offload to the same edge node simultaneously. We therefore formulate the problem as a non-cooperative exact potential game (EPG), in which each MU, in each time slot, selfishly maximizes its number of processed central processing unit (CPU) cycles while reducing its energy consumption. Assuming that channel state information (CSI) is static and available to the MUs, we show that the MUs can reach a Nash equilibrium via a best-response-based offloading mechanism. Next, we extend the problem to a practical scenario in which the number of processed CPU cycles is time-varying and unknown to the MUs because of uncertain channel information. In this case, we adopt an unknown-payoff game framework and prove that the EPG properties still hold. We then propose a model-free reinforcement learning offloading mechanism that helps MUs learn long-term offloading strategies maximizing their long-term utilities. Numerical results show that the proposed algorithm for unknown CSI outperforms other schemes, such as local processing and random assignment, and achieves up to 87.87% of the average long-term payoff attained in the perfect-CSI case.
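The best-response dynamics described above can be illustrated with a minimal sketch. The code below is a hypothetical toy congestion model, not the paper's actual utility function: each MU picks one edge node, its payoff (a stand-in for processed CPU cycles) is the node capacity divided by the node's load, and users take turns making strictly improving best responses. Because such a congestion game is an exact potential game, the dynamics terminate at a Nash equilibrium. The names `payoff` and `best_response_dynamics`, the capacity value, and the user/node counts are all illustrative assumptions.

```python
import random

def payoff(node, loads, capacity=10.0):
    # Toy utility: processed cycles shrink as more MUs share the same edge node.
    return capacity / loads[node]

def best_response_dynamics(n_users=6, n_nodes=3, max_rounds=100, seed=0):
    rng = random.Random(seed)
    choice = [rng.randrange(n_nodes) for _ in range(n_users)]  # random initial offloading
    for _ in range(max_rounds):
        changed = False
        for u in range(n_users):
            loads = [0] * n_nodes
            for c in choice:
                loads[c] += 1
            best, best_util = choice[u], payoff(choice[u], loads)
            for a in range(n_nodes):
                if a == choice[u]:
                    continue
                # Evaluate unilaterally deviating to node a.
                loads[choice[u]] -= 1
                loads[a] += 1
                util = payoff(a, loads)
                loads[a] -= 1
                loads[choice[u]] += 1
                if util > best_util + 1e-9:  # only strictly improving moves
                    best, best_util = a, util
            if best != choice[u]:
                choice[u] = best
                changed = True
        if not changed:
            break  # Nash equilibrium: no MU can improve unilaterally
    return choice

equilibrium = best_response_dynamics()
```

With identical nodes and this payoff, the unique equilibrium load profile is balanced (two MUs per node for six MUs and three nodes), and the strict-improvement rule guarantees termination via the potential function of the game.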
Kai Peng, Yiwen Zhang, Xiaofei Wang, Xiaolong Xu, Xiuhua Li, Victor C. M. Leung
Jinfang Sheng, Jie Hu, Xiaoyu Teng, Bin Wang, Pan Xiao-xia
Yuqing Li, Xiong Wang, Xiaoying Gan, Haiming Jin, Luoyi Fu, Xinbing Wang
Cheng Zhong, Shaoyong Guo, Pengcheng Lu, Sujie Shao