Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part II

Research Article

Adaptive Traffic Signal Control using Deep-Q Network for Enhanced Traffic Flow Efficiency

Cite
@INPROCEEDINGS{10.4108/eai.28-4-2025.2358043,
    author={P Koushik Reddy and R. Lotus and G. Mariammal},
    title={Adaptive Traffic Signal Control using Deep-Q Network for Enhanced Traffic Flow Efficiency},
    proceedings={Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part II},
    publisher={EAI},
    proceedings_a={ICITSM PART II},
    year={2025},
    month={10},
    keywords={adaptive traffic signal control; deep reinforcement learning; congestion management; real-time decision making; emissions reduction},
    doi={10.4108/eai.28-4-2025.2358043}
}
    
P Koushik Reddy1,*, R. Lotus1, G. Mariammal1
1: Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
*Contact email: vtu19795@veltech.edu.in

Abstract

Traffic congestion remains a critical challenge in urban transportation, emphasizing the need for intelligent traffic control systems. Traditional control strategies, such as fixed-time and adaptive approaches, often fail to consider fluctuations in traffic flow, leading to inefficiencies as conditions worsen. This paper introduces a novel traffic signal control framework based on Deep Q-Networks (DQN), leveraging reinforcement learning to optimize signal timing dynamically. The agent interacts with its environment, incorporating real-time traffic flow, vehicle waiting times, and phase transitions to adaptively tune signals, thereby reducing congestion and improving throughput. Experiments conducted using the Simulation of Urban Mobility (SUMO) platform demonstrate significant improvements over conventional methods, reducing average waiting time to 42.5 seconds and achieving traffic efficiency of 82.2%. The scalability and flexibility of this system make it a promising solution for traffic management in densely populated cities, supporting real-time decision-making and sustainable urban mobility.
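To make the approach described in the abstract concrete, the following is a minimal sketch of a DQN-style signal-control agent. It is not the authors' implementation: the state encoding (per-approach queue lengths plus the current phase), network size, learning rate, and reward are all assumptions for illustration; the network is a small two-layer MLP with a manual SGD step on the squared TD error, standing in for whatever architecture the paper actually uses.

```python
import random
from collections import deque

import numpy as np


class TrafficDQNAgent:
    """Minimal DQN sketch for adaptive signal control.

    State (assumed encoding): per-approach queue lengths plus a one-hot of
    the current phase. Action: index of the signal phase to activate next.
    """

    def __init__(self, state_dim, n_phases, hidden=32, lr=0.01,
                 gamma=0.95, epsilon=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_phases))
        self.b2 = np.zeros(n_phases)
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon
        self.n_phases = n_phases
        self.replay = deque(maxlen=10_000)   # experience replay buffer
        self.rng = random.Random(seed)

    def _forward(self, s):
        pre = s @ self.w1 + self.b1           # hidden pre-activation
        h = np.maximum(pre, 0.0)              # ReLU
        return pre, h, h @ self.w2 + self.b2  # Q-value per phase

    def q_values(self, s):
        return self._forward(np.asarray(s, float))[2]

    def act(self, s):
        """Epsilon-greedy phase selection."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_phases)
        return int(np.argmax(self.q_values(s)))

    def update(self, s, a, r, s_next, done):
        """One SGD step on the squared TD error for a single transition."""
        s = np.asarray(s, float)
        target = r if done else r + self.gamma * np.max(self.q_values(s_next))
        pre, h, q = self._forward(s)
        delta = q[a] - target                 # TD error
        dq = np.zeros(self.n_phases)
        dq[a] = delta
        # Manual backprop through the two-layer network.
        dh = (dq @ self.w2.T) * (pre > 0)
        self.w2 -= self.lr * np.outer(h, dq)
        self.b2 -= self.lr * dq
        self.w1 -= self.lr * np.outer(s, dh)
        self.b1 -= self.lr * dh

    def step(self, transition, batch_size=32):
        """Store a transition, then replay a random minibatch."""
        self.replay.append(transition)
        batch = self.rng.sample(list(self.replay),
                                min(batch_size, len(self.replay)))
        for s, a, r, s_next, done in batch:
            self.update(s, a, r, s_next, done)
```

In a SUMO-based setup such as the one the paper describes, the reward at each decision step could be the negative change in cumulative vehicle waiting time at the intersection (one plausible choice, not necessarily the paper's), and the agent's `act`/`step` loop would be driven by the simulator's per-step observations.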

Keywords
adaptive traffic signal control, deep reinforcement learning, congestion management, real-time decision making, emissions reduction
Published
2025-10-14
Publisher
EAI
http://dx.doi.org/10.4108/eai.28-4-2025.2358043
Copyright © 2025 EAI
Indexed in EBSCO, ProQuest, DBLP, DOAJ, Portico