Jiahui Liu, Yuan Zou, Guodong Du, Xudong Zhang, Jinming Wu
Intelligent connected vehicles (ICVs) face challenges in handling intensive onboard computational tasks due to their limited computing capacity. Vehicular edge computing networks (VECNs) offer a promising solution by enabling ICVs to offload tasks to mobile edge computing (MEC) servers, alleviating the onboard computational load. Because transportation systems are dynamic, vehicular tasks and MEC capacities vary over time, making efficient task offloading and resource allocation crucial. We explore a vehicle–road collaborative edge computing network and formulate the joint task offloading scheduling and resource allocation problem to minimize the sum of time and energy costs. To handle the mixed discrete and continuous decision variables and reduce computational complexity, we propose a hybrid hierarchical deep reinforcement learning (HHDRL) algorithm structured in two layers. The upper layer enhances the double deep Q-network (DDQN) with a self-attention mechanism to improve feature correlation learning and generates discrete actions (communication decisions), while the lower layer employs the deep deterministic policy gradient (DDPG) to produce continuous actions (power control, task offloading, and resource allocation decisions). This hybrid design decomposes the complex action space efficiently and improves adaptability in dynamic environments. Numerical simulation results show that HHDRL achieves a significant reduction in total computational cost relative to current benchmark algorithms. Furthermore, the robustness of HHDRL to varying environmental conditions is confirmed by drawing certain simulation parameters uniformly at random from specified ranges.
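The two-layer structure described above can be illustrated with a minimal PyTorch sketch. All dimensions, layer sizes, and module names below (e.g., AttentionDDQN, DDPGActor, num_vehicles, feat_dim) are illustrative assumptions, not the paper's actual architecture; the sketch only shows how a self-attention DDQN can pick a discrete communication decision that then conditions a DDPG actor's continuous power/offloading/allocation outputs.

```python
# Minimal sketch of HHDRL-style hybrid action selection (assumed shapes/names).
import torch
import torch.nn as nn

class AttentionDDQN(nn.Module):
    """Upper layer: DDQN Q-network with self-attention over per-vehicle
    features, producing Q-values for discrete communication decisions."""
    def __init__(self, num_vehicles, feat_dim, num_actions, embed_dim=64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)
        # Self-attention learns correlations between vehicle feature vectors.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(num_vehicles * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state):          # state: (B, num_vehicles, feat_dim)
        x = self.embed(state)
        x, _ = self.attn(x, x, x)      # self-attention with Q = K = V
        return self.head(x.flatten(1)) # Q-values: (B, num_actions)

class DDPGActor(nn.Module):
    """Lower layer: deterministic actor mapping the state plus the chosen
    discrete action to bounded continuous decisions in [0, 1]."""
    def __init__(self, state_dim, num_discrete, cont_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_discrete, 128), nn.ReLU(),
            nn.Linear(128, cont_dim), nn.Sigmoid(),
        )

    def forward(self, flat_state, discrete_onehot):
        return self.net(torch.cat([flat_state, discrete_onehot], dim=-1))

# One hybrid action step: the upper layer's greedy discrete choice
# conditions the lower layer's continuous decisions (assumed sizes).
num_vehicles, feat_dim, num_actions, cont_dim = 4, 8, 5, 6
upper = AttentionDDQN(num_vehicles, feat_dim, num_actions)
lower = DDPGActor(num_vehicles * feat_dim, num_actions, cont_dim)

state = torch.randn(1, num_vehicles, feat_dim)
a_disc = upper(state).argmax(dim=-1)                           # discrete action
onehot = torch.nn.functional.one_hot(a_disc, num_actions).float()
a_cont = lower(state.flatten(1), onehot)                       # continuous actions
print(a_disc.item(), a_cont.shape)
```

In this decomposition, each layer searches a much smaller action space than a single agent over the joint discrete-continuous space would, which is the complexity reduction the abstract attributes to the hierarchical design.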