JOURNAL ARTICLE

Deep Reinforcement Learning based Congestion Control for V2X Communication

Abstract

In Release 14 (Rel-14) of Long Term Evolution (LTE), the 3rd Generation Partnership Project (3GPP) introduced Cellular Vehicle-to-Everything (C-V2X) communication to pave the way for future intelligent transport systems (ITS). C-V2X communication envisions supporting a diverse range of use cases with varying quality of service (QoS) requirements. For example, cooperative collision avoidance requires stringent reliability, while infotainment use cases require high data throughput. C-V2X communication remains susceptible to performance degradation due to network congestion. This paper presents a centralized congestion control scheme for C-V2X communication based on the Deep Reinforcement Learning (DRL) framework. The algorithm is evaluated through system-level simulation of the TAPASCologne scenario in the Simulation of Urban Mobility (SUMO) platform. The results show the effectiveness of the DRL-based approach in achieving a packet reception ratio (PRR) consistent with each packet's associated QoS while keeping the average measured Channel Busy Ratio (CBR) below 0.65.
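The abstract's objective, meeting per-QoS PRR targets while capping channel load at a CBR of 0.65, can be sketched as a reward signal a DRL agent might optimize. The following is a minimal illustrative sketch, not the paper's actual reward function; the function name, the saturation shape, and the `penalty` weight are all assumptions made here for illustration.

```python
# Hypothetical reward shaping for a DRL congestion-control agent (illustrative
# only, not the paper's method): reward grows until the packet reception ratio
# (PRR) meets its QoS target, and is penalized when the channel busy ratio
# (CBR) exceeds the 0.65 cap mentioned in the abstract.

def congestion_reward(prr, prr_target, cbr, cbr_limit=0.65, penalty=10.0):
    """Reward is maximal when PRR meets its QoS target and CBR stays capped."""
    qos_term = min(prr / prr_target, 1.0)                  # saturates at the target
    congestion_term = penalty * max(cbr - cbr_limit, 0.0)  # penalizes CBR overshoot
    return qos_term - congestion_term

# Reliability target met on an uncongested channel -> full reward:
print(congestion_reward(prr=0.96, prr_target=0.95, cbr=0.50))  # 1.0
# Same PRR, but the channel is congested (CBR 0.75 > 0.65) -> reward collapses:
print(congestion_reward(prr=0.96, prr_target=0.95, cbr=0.75))
```

A higher-priority QoS class would simply be assigned a stricter `prr_target`, so the same reward naturally differentiates use cases such as collision avoidance versus infotainment.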

Keywords:
Reinforcement learning; Computer science; Network congestion; Control (management); Artificial intelligence; Computer network

Topics

Software-Defined Networks and 5G
Opportunistic and Delay-Tolerant Networks
IoT and Edge/Fog Computing