Bin Li, Wancheng Xie, Yinghui Ye, Lei Liu, Zesong Fei
Integrating unmanned aerial vehicles (UAVs) into vehicular networks has shown great potential for handling intensive computing tasks. In this paper, we study digital twin-driven vehicular edge computing networks for adaptive computation resource management, where a UAV named FlexEdge acts as a flying server. In particular, we first formulate an energy consumption minimization problem that jointly optimizes the UAV trajectory and computation resources under practical constraints. To address this challenging problem, we model the computation offloading process as a Markov decision process and propose a deep reinforcement learning algorithm based on proximal policy optimization (PPO) to dynamically learn the computation offloading strategy and trajectory design policy. Numerical results indicate that our proposed algorithm achieves a fast convergence rate and significantly reduces system energy consumption.
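The proximal policy optimization algorithm referenced in the abstract is built around a clipped surrogate objective that limits how far each policy update can move from the previous policy. A minimal sketch of that objective is below; the function name and the toy inputs are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss from PPO.

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage -- estimated advantage of each sampled action
    eps       -- clipping parameter limiting the policy update step
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (minimum) objective, negate to form a loss.
    return -np.minimum(unclipped, clipped).mean()

# Toy example: a ratio of 2.0 is clipped to 1.2 when eps = 0.2,
# so large policy steps contribute no extra gradient signal.
loss = ppo_clip_loss(np.array([2.0]), np.array([1.0]))
```

In the paper's setting, the policy network would output the UAV trajectory and offloading decisions, and the advantage would reflect the negative system energy consumption.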