The rapid expansion of wireless services designed for local area communication has underscored the necessity of effective resource allocation within cellular vehicle-to-everything (C-V2X) networks. These networks, which operate using fifth-generation (5G) technology, improve system performance by facilitating peer-to-peer communication and resource sharing among nearby devices and mobile users. This paper presents an innovative strategy, Dynamic Resource Reservation with Deep Reinforcement Learning (DR2-DRL), which employs a multi-agent deep reinforcement learning framework to tackle the challenge of packet collisions in vehicular networks. The primary objective of DR2-DRL is to enable vehicles to make intelligent radio resource selections. To improve training efficiency, an optimized attention-based mechanism is integrated, enabling vehicles to selectively concentrate on pertinent information obtained from the observations and actions of neighboring vehicles. Notably, the algorithm is well suited to C-V2X because it enables independent resource selection without relying on global information. Extensive simulations show that DR2-DRL outperforms alternative decentralized approaches, underscoring its scalability and robustness in dynamic vehicular networks.
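The abstract does not detail the attention mechanism, but the core idea it describes, letting each vehicle weight neighbors' observation/action embeddings by relevance before acting, can be illustrated with a minimal scaled dot-product attention sketch. All function and variable names here (`attention_aggregate`, `ego_obs`, `neighbor_feats`) are hypothetical, not from the paper:

```python
import numpy as np

def attention_aggregate(ego_obs, neighbor_feats):
    """Illustrative scaled dot-product attention (assumed form, not the
    paper's exact mechanism): the ego vehicle's observation acts as the
    query; neighbor observation/action embeddings act as keys and values.
    Returns an attention-weighted summary of neighbor information."""
    d_k = ego_obs.shape[-1]
    # Relevance score of each neighbor to the ego vehicle, scaled by sqrt(d_k)
    scores = neighbor_feats @ ego_obs / np.sqrt(d_k)          # shape (N,)
    # Softmax over neighbors (subtract max for numerical stability)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted combination of neighbor features fed to the agent's policy
    context = weights @ neighbor_feats                        # shape (d,)
    return context, weights

# Usage: one ego observation, three neighbor embeddings of dimension 4
ego = np.ones(4)
nbrs = np.arange(12.0).reshape(3, 4)
ctx, w = attention_aggregate(ego, nbrs)
```

In a decentralized C-V2X setting this matters because each vehicle can compute such weights locally from sidelink messages it overhears, so no global coordinator is needed to decide which neighbors' behavior should influence its resource choice.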
Mohammad Parvini, Mohammad Reza Javan, Nader Mokari, Bijan Abbasi, Eduard A. Jorswieck