Abstract

Nowadays, one of the biggest issues in urban areas is traffic congestion. It wastes valuable time and contributes to air and noise pollution, harming public health as well as general quality of life. Our study attempts to mitigate these problems by cutting down on wait times and delays. Our reinforcement learning method creates intelligent agents that can adjust traffic lights at intersections in real time. Our objective is to minimize delays, reduce congestion, shorten travel times, improve safety, and improve traffic flow. We implemented the Deep Q-Learning algorithm to learn which actions yield the greatest benefits under various traffic scenarios. Our model can adapt the signal sequence timing: the Green signal (GS) lasts 10 seconds and the Red signal (RS) lasts 5 seconds. As a result, the waiting period is shortened by 50%. This study suggests reinforcement learning may improve traffic signal controller synchronization and reduce urban traffic congestion. This novel method may improve transport efficiency and sustainability.
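The abstract's approach, an agent that learns which signal actions pay off under different traffic conditions, can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes tabular Q-learning for the Deep Q-Learning network, and the toy environment (queue lengths capped at 9, assumed discharge and arrival rates, a two-phase intersection) is entirely hypothetical, loosely echoing the 10 s green / 5 s red split.

```python
import random

# Toy sketch (assumption: not the paper's code). A state is (north_queue,
# east_queue); the agent also tracks which phase is green. Actions:
# 0 = keep the current green phase, 1 = switch phases. The reward is the
# negative total queue length, so shorter queues score higher.

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, phase, action):
    """One signal cycle: the green approach discharges cars, the red
    approach accumulates arrivals (assumed rates: 3 clear, 2 arrive)."""
    n, e = state
    if action == 1:
        phase = 1 - phase
    if phase == 0:          # north-south green
        n = max(0, n - 3)
        e = min(9, e + 2)
    else:                   # east-west green
        e = max(0, e - 3)
        n = min(9, n + 2)
    return (n, e), phase, -(n + e)

def train(episodes=500, seed=0):
    """Epsilon-greedy Q-learning over randomly initialized traffic states."""
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        state, phase = (random.randint(0, 9), random.randint(0, 9)), 0
        for _ in range(50):
            key = (state, phase)
            Q.setdefault(key, [0.0, 0.0])
            if random.random() < EPS:
                a = random.randint(0, 1)
            else:
                a = max((0, 1), key=lambda x: Q[key][x])
            nxt, phase, r = step(state, phase, a)
            nk = (nxt, phase)
            Q.setdefault(nk, [0.0, 0.0])
            # Bellman update toward reward plus discounted best next value.
            Q[key][a] += ALPHA * (r + GAMMA * max(Q[nk]) - Q[key][a])
            state = nxt
    return Q

Q = train()
```

After training, inspecting `Q[(state, phase)]` shows which action the agent favors in each situation; the Deep Q-Learning variant described in the abstract replaces this lookup table with a neural network so the method scales to richer state descriptions.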

Keywords:
Reinforcement learning; Computer science; Control (management); Reinforcement; Artificial intelligence; Engineering; Structural engineering

Metrics

Cited By: 11
FWCI (Field Weighted Citation Impact): 7.00
Refs: 31
Citation Normalized Percentile: 0.95 (in top 1%)

Topics

Traffic control and management
Physical Sciences →  Engineering →  Control and Systems Engineering

Related Documents

Journal article: "Traffic light control with reinforcement learning," Taoyu Pan. Applied and Computational Engineering, 2024, Vol. 43 (1), pp. 26-43.

Journal article: "Adaptive traffic light control using deep reinforcement learning technique," Ritesh Kumar, Nistala Venkata Kameshwer Sharma, Vijay Kumar Chaurasiya. Multimedia Tools and Applications, 2023, Vol. 83 (5), pp. 13851-13872.

Book chapter: "Traffic Light Control Using RFID and Deep Reinforcement Learning," Shivnath Yadav, Sunakshi Singh, Vijay Kumar Chaurasiya. Studies in Computational Intelligence, 2022, pp. 47-64.