In the transportation industry, task offloading services of the edge intelligent Internet of Vehicles (IoV) are expected to provide vehicles with better Quality of Experience (QoE). However, the diverse statuses of edge servers and vehicles, as well as varying vehicular offloading modes, make task offloading services challenging. Therefore, to enhance QoE satisfaction, we first introduce a novel QoE model. Specifically, in the proposed QoE model, which is constrained by energy consumption: (1) intelligent vehicles equipped with caching spaces and computing units may serve as carriers; (2) the varied computational and caching capacities of edge servers can empower offloading; (3) the unpredictable routes of vehicles and edge servers can lead to diverse information transmission. We then propose an improved deep reinforcement learning (DRL) algorithm named RA-DDPG, which augments deep deterministic policy gradient (DDPG) with prioritized experience replay (PER) and stochastic weight averaging (SWA) mechanisms, to seek an optimal offloading mode while saving energy. Extensive experiments confirm the superior stability and convergence of our RA-DDPG algorithm compared with existing work. Moreover, the experiments indicate that the proposed algorithm improves the QoE value.
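The two mechanisms the abstract names, PER and SWA, can be illustrated in isolation. The sketch below is a hypothetical minimal rendering (class and function names, capacities, and the priority exponent are illustrative assumptions, not the paper's implementation): PER samples transitions with probability proportional to their prioritized TD error, and SWA averages weight snapshots collected along the training trajectory.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal PER sketch: sampling probability follows |TD error|^alpha."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.data = []
        self.priorities = []

    def add(self, transition, td_error=1.0):
        # Evict the oldest transition once the buffer is full.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng):
        p = np.asarray(self.priorities)
        p = p / p.sum()             # normalize priorities to probabilities
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx], idx

def swa_average(weight_snapshots):
    """SWA sketch: elementwise mean of weight snapshots taken during training."""
    return np.mean(np.stack(weight_snapshots), axis=0)
```

In a full RA-DDPG loop these pieces would plug into the DDPG actor-critic update: the critic's TD errors set the buffer priorities, and SWA-averaged actor weights are used at evaluation time for a smoother, more stable policy.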
Xiaoming He, Haodong Lu, Miao Du, Yingchi Mao, Kun Wang