Reinforcement Learning Approaches for Traffic Signal Control under Missing Data

Hao Mei, Junxian Li, Bin Shi, Hua Wei

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 2261-2269. https://doi.org/10.24963/ijcai.2023/251

The emergence of reinforcement learning (RL) methods for traffic signal control (TSC) tasks has achieved promising results. Most RL approaches require observations of the environment for the agent to decide which action is optimal for the long-term reward. However, in real-world urban scenarios, observations of traffic states may frequently be missing due to the lack of sensors, which makes existing RL methods inapplicable on road networks with missing observations. In this work, we aim to control the traffic signals in a real-world setting where some intersections in the road network have no sensors installed and thus no direct observations around them. To the best of our knowledge, we are the first to use RL methods to tackle the TSC problem in this real-world setting. Specifically, we propose two solutions: the first imputes the traffic states to enable adaptive control; the second imputes both states and rewards to enable adaptive control and the training of RL agents. Through extensive experiments on both synthetic and real-world road network traffic, we show that our method outperforms conventional approaches and performs consistently across different missing rates. We also investigate how missing data influences the performance of our model.
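The abstract does not detail the imputation mechanism, but a minimal sketch of the general idea is neighbor-based state imputation: when an intersection has no sensor, estimate its traffic state from the observed states of adjacent intersections before feeding it to the control policy. The function below is a hypothetical illustration (all names and the averaging rule are assumptions, not the paper's actual model):

```python
import numpy as np

def impute_states(states, observed_mask, adjacency):
    """Impute missing intersection states from observed neighbors.

    Hypothetical sketch: a simple neighbor average stands in for a
    learned imputation model.

    states:        (n, d) per-intersection traffic features (e.g. queue
                   lengths); rows where observed_mask is False are
                   placeholders.
    observed_mask: (n,) boolean, True where a sensor provides a reading.
    adjacency:     (n, n) 0/1 connectivity matrix of the road network.
    """
    states = states.copy()
    for i in np.where(~observed_mask)[0]:
        # Neighbors of i that actually have sensor readings.
        neighbors = np.where(adjacency[i].astype(bool) & observed_mask)[0]
        if len(neighbors) > 0:
            states[i] = states[neighbors].mean(axis=0)
        # If no observed neighbor exists, the placeholder is left as-is.
    return states
```

The second solution described in the abstract would extend the same idea to rewards, so that RL agents at sensorless intersections can also be trained, not just controlled, on imputed signals.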
Keywords:
Data Mining: DM: Mining spatial and/or temporal data
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
Machine Learning: ML: Reinforcement learning