This project presents an adaptive traffic signal control framework using Deep Q-Learning (DQL) to optimize lane-by-lane clearance times at urban intersections. Unlike traditional fixed-phase systems, our agent predicts continuous green times for each lane, enabling fine-grained, real-time adjustments.
The system is trained in a custom-built simulation that incorporates the factors below (a state-vector sketch follows the list):
- Traffic density and vehicle types
- Weather conditions
- Road conditions
- Time-of-day variations
- Random traffic incidents
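As a rough illustration, these factors can be packed into the nine-feature state vector described in the methodology below. The feature names and encodings here are assumptions for the sketch, not the released code:

```python
import numpy as np

# Hypothetical packing of one decision step's observations into a flat
# nine-feature state vector; names and encodings are illustrative.
def build_state(vehicle_count, heavy_vehicle_ratio, lane_id, num_lanes,
                weather, road_quality, hour_of_day, incident_flag, queue_length):
    """Pack one lane's traffic and environment readings into a state vector."""
    return np.array([
        vehicle_count,                          # vehicles waiting in the lane
        heavy_vehicle_ratio,                    # share of trucks/buses (0..1)
        lane_id,                                # index of the lane being scheduled
        num_lanes,                              # total lanes at the intersection
        weather,                                # categorical code, e.g. 0=clear, 1=rain, 2=fog
        road_quality,                           # categorical code, e.g. 0=good .. 2=poor
        np.sin(2 * np.pi * hour_of_day / 24),   # cyclical time-of-day encoding
        incident_flag,                          # 1.0 if a random incident is active
        queue_length,                           # queue length in metres
    ], dtype=np.float32)
```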
Key features:
- Continuous Action Space: Predicts the optimal clearance time in seconds, rather than selecting from a fixed set of discrete phases.
- Dynamic Environment: Realistic simulation with multiple traffic and environmental factors.
- Deep Neural Network Agent: Learns complex mappings from traffic states to optimal timings.
- Experience Replay: Stabilizes learning by sampling random minibatches of past transitions, breaking correlations between consecutive updates (see the buffer sketch after this list).
- Custom Reward Function: Rewards predictions that closely match the computed optimal clearance time.
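A minimal replay buffer along these lines (a generic sketch, not the project's exact implementation) stores past transitions and draws uniform random minibatches for each update:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences drop off first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling decorrelates consecutive experiences.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```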
Methodology:
- State Space: 9 traffic and environmental features (vehicle counts, lane information, weather, road quality, etc.).
- Optimal Time Calculation: Deterministic formula considering traffic volume and conditions.
- Model Architecture: Two hidden layers (64 & 128 neurons, ReLU activation) with a linear output layer.
- Training Process: Experience Replay combined with a guided exploration strategy (see the model and reward sketch after this list).
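A compact Keras sketch of these pieces is given below. The layer sizes match the architecture described above; the optimal-time formula, the reward shape, and all parameter names are illustrative assumptions, not the project's exact code:

```python
from tensorflow.keras import layers, models

def build_agent(state_dim=9):
    """Architecture as described: two hidden layers (64 and 128 ReLU units)
    and a linear output predicting a continuous clearance time in seconds."""
    model = models.Sequential([
        layers.Input(shape=(state_dim,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="linear"),  # predicted green time (s)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical stand-in for the deterministic optimal-time formula:
def optimal_time(vehicle_count, weather_factor=1.0, road_factor=1.0,
                 base=5.0, per_vehicle=2.0):
    """Base green time plus per-vehicle clearance, scaled up when weather
    or road conditions slow traffic (factors >= 1.0)."""
    return (base + per_vehicle * vehicle_count) * weather_factor * road_factor

def reward(predicted_time, target_time):
    """Illustrative reward: negative absolute deviation, so predictions
    closer to the computed optimum earn higher reward."""
    return -abs(predicted_time - target_time)
```

Under this reading, each replayed minibatch amounts to a regression step toward the stored optimal times, e.g. `model.train_on_batch(states, targets)`, with the guided exploration strategy controlling which states are visited during data collection.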
Results:
- Learned strategies closely approximate the theoretical optima.
- Improved traffic throughput and reduced delays in simulation.
- Potential for deployment in Intelligent Transportation Systems (ITS).
Future work:
- Refined exploration strategies.
- Hyperparameter and reward tuning.
- Multi-agent coordination for network-wide optimization.
Resources:
- Zenodo record: https://doi.org/10.5281/zenodo.16837904
- Kaggle Notebook (code & data): Deep Q-Learning for AI Traffic Management System
If you use this work, please cite:
D. Yousuf, “Adaptive Traffic Signal Control Using Deep Q-Learning for Optimizing Traffic Clearance Times,” Zenodo, Aug. 13, 2025. doi: 10.5281/zenodo.16837904.