Bishmita Hazarika, Keshav Singh, Sudip Biswas, Shahid Mumtaz, Chih-Peng Li
This paper considers an Internet of Vehicles (IoV) network in which multi-access edge computing (MAEC) servers are deployed at base stations (BSs) aided by multiple reconfigurable intelligent surfaces (RISs) for both uplink and downlink transmission. An intelligent task offloading methodology is designed to optimize the resource allocation scheme in the vehicular network based on the state of criticality of the network and the priority and size of the tasks. We then develop a multi-agent deep reinforcement learning (MA-DRL) framework using a Markov game to optimize the task offloading decision strategy. The proposed algorithm maximizes the mean utility of the IoV network and improves communication quality. Extensive numerical results demonstrate that the RIS-assisted IoV network using the proposed MA-DRL algorithm achieves higher utility than current state-of-the-art networks (not aided by RISs) and other baseline DRL algorithms, namely soft actor-critic (SAC), deep deterministic policy gradient (DDPG), and twin delayed DDPG (TD3). The proposed method improves the offloading data rate of the tasks, reduces the mean delay, and ensures that a higher percentage of offloaded tasks are completed compared with other DRL-based and non-RIS-assisted IoV frameworks.
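The priority- and size-aware offloading decision described above can be illustrated with a minimal sketch. Note that all names, the utility form, and the numeric parameters below are illustrative assumptions, not the paper's actual formulation: each task is scored by a priority-weighted deadline margin, and a greedy per-task choice is made between local execution and offloading to the MAEC server over the (RIS-boosted) link.

```python
# Hypothetical sketch of a priority/size-aware offloading decision; the
# utility function and all parameters are illustrative assumptions, not
# the MA-DRL policy from the paper.
from dataclasses import dataclass


@dataclass
class Task:
    size_bits: float    # task size in bits (transmitted if offloaded)
    priority: float     # higher value = more critical task
    cpu_cycles: float   # CPU cycles required to process the task


def offload_delay(task: Task, rate_bps: float, server_cps: float) -> float:
    """Transmission delay over the link plus MAEC-server processing delay."""
    return task.size_bits / rate_bps + task.cpu_cycles / server_cps


def local_delay(task: Task, vehicle_cps: float) -> float:
    """Delay when the vehicle processes the task on its own CPU."""
    return task.cpu_cycles / vehicle_cps


def utility(task: Task, delay_s: float, deadline_s: float) -> float:
    """Priority-weighted deadline margin: positive iff the deadline is met."""
    return task.priority * (deadline_s - delay_s)


def decide(task: Task, rate_bps: float, server_cps: float,
           vehicle_cps: float, deadline_s: float) -> str:
    """Greedy per-task decision: pick the action with the higher utility."""
    u_off = utility(task, offload_delay(task, rate_bps, server_cps), deadline_s)
    u_loc = utility(task, local_delay(task, vehicle_cps), deadline_s)
    return "offload" if u_off >= u_loc else "local"
```

For example, a compute-heavy task over a fast link favors offloading, while the same task over a slow link is better kept local; a learned policy such as the paper's MA-DRL agent replaces this greedy rule with decisions optimized over the whole network state.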