In this paper, we consider an Internet of Vehicles (IoV) network in which unmanned aerial vehicles (UAVs) assist the mobile edge computing (MEC) servers of roadside units (RSUs) to provide ubiquitous connectivity to vehicles. A virtual representation of the IoV network is established in the aerial network as a digital twin (DT), which captures the dynamics of the physical network's entities in real time in order to perform efficient resource allocation. To this end, we investigate an intelligent delay-sensitive task offloading scheme for the dynamic vehicular environment that provides computation resources via local execution, vehicle-to-vehicle (V2V), and vehicle-to-infrastructure/RSU (V2I) offloading modes based on the energy consumption of the system. We propose a deep reinforcement learning (DRL)-based resource allocation scheme in the DT (RADiT) of the IoV network that maximizes its utility while optimizing the task offloading strategy. We compare the performance of the proposed algorithm with and without the V2V computation mode, as well as against a benchmark DRL algorithm, soft actor-critic (SAC). Finally, simulations are performed to demonstrate the efficacy of the proposed RADiT algorithm.
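To make the offloading decision concrete, the following is a minimal, hypothetical sketch (not the paper's RADiT algorithm) of choosing among local, V2V, and V2I execution: each mode is checked against the task's delay budget, and among the feasible modes the one with the lowest vehicle-side energy cost is selected. All parameter names and values here are illustrative assumptions.

```python
def select_mode(task_cycles, task_bits, deadline_s, modes):
    """Pick the feasible offloading mode with minimum vehicle-side energy.

    modes: dict mapping a mode name to a dict with
      'rate_bps'          - uplink rate (None for local execution, no transmission)
      'cpu_hz'            - CPU frequency of the executing node
      'tx_power_w'        - vehicle transmit power while uploading
      'energy_per_cycle_j'- vehicle energy per CPU cycle (0 when computation
                            is borne by the remote V2V/V2I node)
    Returns (mode_name, energy_joules) or None if no mode meets the deadline.
    """
    best = None
    for name, m in modes.items():
        tx_time = 0.0 if m["rate_bps"] is None else task_bits / m["rate_bps"]
        comp_time = task_cycles / m["cpu_hz"]
        if tx_time + comp_time > deadline_s:
            continue  # this mode misses the delay budget
        energy = tx_time * m["tx_power_w"] + task_cycles * m["energy_per_cycle_j"]
        if best is None or energy < best[1]:
            best = (name, energy)
    return best
```

A DRL agent such as the one described above would learn this kind of mode selection jointly with resource allocation, rather than evaluating a fixed closed-form cost per task.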
Bishmita Hazarika, Keshav Singh, Chih-Peng Li, Anke Schmeink, Kim Fung Tsang