JOURNAL ARTICLE

Adaptive traffic signal control using deep reinforcement learning for network traffic incidents

Li, Tianxin

Year: 2023
Journal: Texas Digital Library (University of Texas)
Publisher: The University of Texas at Austin

Abstract

Traffic signal control is an essential aspect of urban mobility that significantly impacts the efficiency and safety of transportation networks. Traditional traffic signal control systems rely on fixed-time or actuated signal timings, which may not adapt to dynamic traffic demands and congestion patterns. Researchers and practitioners have therefore increasingly turned to reinforcement learning (RL) as a promising approach to improving the performance of traffic signal control. This dissertation investigates the application of RL algorithms to traffic signal control, aiming to optimize traffic flow and reduce congestion. The study develops a simulation model of a signalized intersection and trains RL agents to adjust signal timings based on real-time traffic conditions. The agents learn from experience and adapt to changing traffic patterns, improving the efficiency of traffic flow even in scenarios where traffic incidents occur in the network. The dissertation explores the potential benefits of RL-based traffic signal control in scenarios with and without traffic incidents. To this end, an incident generation module was developed within an open-source traffic signal performance simulation framework built on the SUMO software. The module generates incidents at random locations in the network and includes emergency response vehicles to mimic the realistic impact of traffic incidents. By exposing the RL agent to this environment, the agent learns from experience and optimizes traffic signal control to reduce system delay. The study began with a single-intersection scenario, where a Deep Q-Network (DQN) algorithm formed the RL traffic signal controller. To improve the training process and model performance, an experience replay buffer and a target network were implemented to address the known limitations of DQN.
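The two DQN stabilizers mentioned above can be illustrated with a minimal sketch. This is not the dissertation's implementation: it substitutes a linear Q-function for a neural network, and the names (`ReplayBuffer`, `LinearDQNAgent`) and hyperparameter values are illustrative assumptions.

```python
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    """Experience replay: store transitions, sample decorrelated minibatches."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)
    def push(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))
    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        return s, a, r, s2, d
    def __len__(self):
        return len(self.buffer)

class LinearDQNAgent:
    """DQN sketch with a linear Q-function: Q(s, a) = W[a] @ s + b[a]."""
    def __init__(self, state_dim, n_actions, lr=1e-3, gamma=0.95):
        self.W = np.zeros((n_actions, state_dim))
        self.b = np.zeros(n_actions)
        self.tW, self.tb = self.W.copy(), self.b.copy()  # target network copy
        self.lr, self.gamma, self.n_actions = lr, gamma, n_actions
    def q(self, s, target=False):
        W, b = (self.tW, self.tb) if target else (self.W, self.b)
        return s @ W.T + b
    def act(self, s, eps=0.1):
        # epsilon-greedy action selection (e.g. which signal phase to serve)
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q(s[None])[0]))
    def train_step(self, batch):
        s, a, r, s2, done = batch
        # bootstrap target from the frozen target network, not the online one
        q_next = self.q(s2, target=True).max(axis=1)
        target = r + self.gamma * q_next * (1 - done)
        q_sa = self.q(s)[np.arange(len(a)), a]
        td_err = target - q_sa
        # gradient step on 0.5 * td_err^2 for the taken actions only
        for i in range(len(a)):
            self.W[a[i]] += self.lr * td_err[i] * s[i]
            self.b[a[i]] += self.lr * td_err[i]
    def sync_target(self):
        # periodically copy online weights into the target network
        self.tW, self.tb = self.W.copy(), self.b.copy()
```

In a signal-control setting the state would typically encode per-approach queue lengths or occupancies, and actions would select the next signal phase; the replay buffer breaks the temporal correlation of successive intersection states, while the target network keeps the bootstrap targets stable between synchronizations.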
Hyperparameter tuning was conducted to find the best parameter combination for training, and the results showed that DQN outperformed the other controllers in terms of system-wide and intersection-level queue distribution and vehicle delay. The study was then extended to a small corridor with two intersections and to a 2x2 grid network, with the incident generation module used to expose the RL agent to different traffic scenarios. Again, hyperparameter tuning was conducted, and the DQN model outperformed the other controllers in reducing congestion and improving system performance. The robustness of the DQN controller was also tested under different demand levels, and the microsimulation results showed that its performance was consistent. Overall, this study highlights the potential of RL algorithms to optimize traffic signal control in scenarios with and without traffic incidents. The incident generation module developed in this study provides a realistic environment for the RL agent to learn and adapt, leading to improved system performance and reduced congestion. In addition, hyperparameter tuning is essential to lay a solid foundation for the RL training process.
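The random incident generation described above could be sketched as a simple scheduler. This is a hypothetical illustration, not the module from the dissertation: the `Incident` fields and parameter names are assumptions, and in a SUMO-based setup the resulting schedule would be applied through TraCI (for example by stopping a vehicle on the chosen edge with `traci.vehicle.setStop` and routing an emergency response vehicle to it).

```python
import random
from dataclasses import dataclass

@dataclass
class Incident:
    edge: str        # network edge where the incident occurs
    start_time: int  # simulation second at which the blockage begins
    duration: int    # seconds until emergency response clears it

def generate_incidents(edges, sim_horizon, n_incidents,
                       min_duration=120, max_duration=600, seed=None):
    """Randomly schedule incidents across the network within the horizon."""
    rng = random.Random(seed)
    incidents = []
    for _ in range(n_incidents):
        duration = rng.randint(min_duration, max_duration)
        # pick a start time so the incident is fully cleared before the horizon
        start = rng.randint(0, max(0, sim_horizon - duration))
        incidents.append(Incident(rng.choice(edges), start, duration))
    return sorted(incidents, key=lambda i: i.start_time)
```

Because the schedule is drawn randomly per episode, the RL agent is exposed to a different incident pattern each time it trains, which is what lets it generalize to disrupted as well as undisrupted traffic conditions.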

Keywords:
Reinforcement learning; Traffic signal control; Intersection; Traffic flow; Traffic generation model; Traffic simulation; Traffic congestion; Traffic optimization

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.31
References: 0
Citation Normalized Percentile: 0.66

