Dapeng Wu, Sijun Wu, Yaping Cui, Ailing Zhong, Tong Tang, Ruyan Wang, Xinqi Lin
Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, due to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate in VEC. Conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and further subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a Reinforcement Learning (RL) algorithm based on a sequential alternating multi-agent system mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, thereby achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that our proposed mechanism outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
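The core idea described above, having agents decide sequentially rather than jointly so that each policy network only outputs one agent's action while still conditioning on predecessors' choices, can be illustrated with a minimal sketch. This is not the authors' implementation; the agent count, action count, and toy greedy policies are illustrative assumptions.

```python
# Hedged sketch: how a sequential alternating multi-agent scheme shrinks
# the per-decision action space relative to a centralized joint policy.
# N_AGENTS and N_ACTIONS are illustrative assumptions, not the paper's values.

N_AGENTS = 4      # e.g., vehicles making offloading decisions
N_ACTIONS = 8     # discrete choices per agent (e.g., candidate edge servers)

# Centralized joint decision: one policy over the Cartesian product of actions.
joint_action_space = N_ACTIONS ** N_AGENTS   # 8**4 = 4096 combinations

# Sequential alternating decision: agents act one at a time in a fixed order,
# so each policy network only needs an output head of size N_ACTIONS.
per_step_action_space = N_ACTIONS            # 8 outputs per network


def sequential_decisions(policies, state):
    """Each agent picks its action in turn; earlier agents' actions are
    appended to the observation, so no action information is lost."""
    actions = []
    for policy in policies:
        obs = (state, tuple(actions))  # shared state + predecessors' actions
        actions.append(policy(obs))
    return actions


# Toy greedy "policies" for demonstration: each agent takes the lowest-index
# server not already chosen by a predecessor (a simple invalid-action mask).
policies = [
    lambda obs: min(set(range(N_ACTIONS)) - set(obs[1]))
    for _ in range(N_AGENTS)
]

chosen = sequential_decisions(policies, state="s0")
print(joint_action_space, per_step_action_space, chosen)
```

The reduction from 4096 joint actions to 8 per step is what allows each agent's network to stay small ("lightweight"), while passing predecessors' actions forward preserves the information a joint decision would have used.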