
Research Article
Episodes-Based Traffic Signal Control: A Deep Reinforcement Learning Approach With Fluid-Dynamic Simulation
@INPROCEEDINGS{10.4108/eai.28-4-2025.2357827,
  author={Amritha G Prasad and Jeyavardhini S and Sonal Panda and Hashveen S P and T. Grace Shalini},
  title={Episodes-Based Traffic Signal Control: A Deep Reinforcement Learning Approach With Fluid-Dynamic Simulation},
  proceedings={Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part I},
  publisher={EAI},
  proceedings_a={ICITSM PART I},
  year={2025},
  month={10},
  keywords={traffic congestion; reinforcement learning; deep-q network; fluid based simulation; taichi},
  doi={10.4108/eai.28-4-2025.2357827}
}
Amritha G Prasad
Jeyavardhini S
Sonal Panda
Hashveen S P
T. Grace Shalini
Year: 2025
Episodes-Based Traffic Signal Control: A Deep Reinforcement Learning Approach With Fluid-Dynamic Simulation
ICITSM PART I
EAI
DOI: 10.4108/eai.28-4-2025.2357827
Abstract
Traffic congestion remains a critical issue in urban traffic networks, leading to increased fuel usage, emissions, and frustration among commuters. This project proposes an AI-driven traffic optimization approach that integrates Reinforcement Learning (RL) with fluid-based traffic flow simulation. Using the Deep Q-Network (DQN) algorithm, the system trains an intelligent agent to dynamically adapt traffic light states in order to minimize road congestion. The road is modeled as a one-dimensional grid in which car flow is simulated with Taichi, a high-performance computing library, treating cars as a fluid so that densities can be computed accurately. Initial traffic is imported from a density database that represents the distribution of cars across road segments. At every time step, the agent observes key features such as the average, maximum, and minimum traffic densities, selects traffic light actions, and receives feedback based on the congestion state of the system. The simulation displays the immediate impact of switching the lights, and the agent learns to avoid traffic congestion through continuous interaction. The final outcomes are visualized with heatmaps and animations that depict improvements in flow patterns. This project demonstrates how AI can control traffic systems autonomously and presents a workable, scalable solution for real-world urban mobility problems.
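
The article itself does not include code; the following is a minimal sketch of how the fluid-style road model described in the abstract could be set up in Taichi. It treats the road as a one-dimensional array of cells, moves density between cells with a simplified boundary flux (the smaller of the two neighbouring cell flows under a linear speed-density relation), and blocks flow at a signal cell when the light is red. The grid size, the flow function, and the single LIGHT_CELL location are illustrative assumptions, not the authors' implementation.

import taichi as ti

ti.init(arch=ti.cpu)

# Illustrative parameters; the paper's actual grid size, speed-density law,
# and signal placement are not specified in the abstract.
N = 200                    # number of road cells
DX, DT = 1.0, 0.2          # cell length and time step
V_MAX, RHO_MAX = 1.0, 1.0  # free-flow speed and jam density
LIGHT_CELL = N // 2        # hypothetical traffic-light position

rho = ti.field(ti.f32, shape=N)       # car density in each cell
flux = ti.field(ti.f32, shape=N + 1)  # flow across each cell boundary

@ti.func
def flow(r):
    # flow = density times a speed that drops linearly with density
    return r * V_MAX * (1.0 - r / RHO_MAX)

@ti.kernel
def compute_flux(green: ti.i32):
    for i in flux:
        if i == 0 or i == N:
            flux[i] = 0.0  # closed road ends
        else:
            # simplified boundary flux: the smaller of the two cell flows
            flux[i] = ti.min(flow(rho[i - 1]), flow(rho[i]))
            if i == LIGHT_CELL and green == 0:
                flux[i] = 0.0  # a red light blocks flow at the signal

@ti.kernel
def update_density():
    for i in rho:
        # conservative update: inflow from the left minus outflow to the right
        rho[i] += DT / DX * (flux[i] - flux[i + 1])

def step(green: int):
    compute_flux(green)
    update_density()

After each call to step, the density statistics that form the agent's state (average, maximum, minimum) can be read from rho.to_numpy(), and the same array can be plotted over time to produce the heatmaps mentioned in the abstract.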
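Similarly, the observe-act-reward loop from the abstract can be sketched as a small DQN agent in PyTorch. The three-feature state (average, maximum, minimum density), the two-phase action set, and a reward equal to the negative average density are assumptions made for illustration; the replay buffer and target network of a full DQN are omitted for brevity, and none of the names below come from the paper.

import random
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 3, 2          # [avg, max, min] density; two signal phases
GAMMA, EPSILON, LR = 0.95, 0.1, 1e-3  # illustrative hyperparameters

q_net = nn.Sequential(                # small Q-network mapping state -> Q-values
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=LR)

def select_action(state):
    # epsilon-greedy choice over traffic-light phases
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())

def td_update(state, action, reward, next_state):
    # one-step temporal-difference update on a single transition
    q_pred = q_net(torch.tensor(state))[action]
    with torch.no_grad():
        q_target = reward + GAMMA * q_net(torch.tensor(next_state)).max()
    loss = (q_pred - q_target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Hypothetical transition: the reward is minus the average density, so heavier
# congestion yields a lower reward.
s = [0.42, 0.90, 0.05]
a = select_action(s)
s_next, r = [0.38, 0.85, 0.04], -0.38
td_update(s, a, r, s_next)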