Reducing Bus Bunching with Asynchronous Multi-Agent Reinforcement Learning
Jiawei Wang, Lijun Sun
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 426-433.
https://doi.org/10.24963/ijcai.2021/60
The bus system is a critical component of sustainable urban transportation. However, due to significant uncertainties in passenger demand and traffic conditions, bus operation is inherently unstable, and bus bunching has become a common phenomenon that undermines the reliability and efficiency of bus services. Despite recent advances in multi-agent reinforcement learning (MARL) for traffic control, little research has focused on bus fleet control because of the challenging asynchronous nature of the problem: control actions occur only when a bus arrives at a stop, so agents do not act simultaneously. In this study, we formulate route-level bus fleet control as an asynchronous multi-agent reinforcement learning (ASMR) problem and extend the classical actor-critic architecture to handle the asynchronous issue. Specifically, we design a novel critic network to effectively approximate the marginal contribution of other agents, in which a graph attention network is used to conduct inductive learning for policy evaluation. The critic structure also helps the ego agent optimize its policy more efficiently. We evaluate the proposed framework on real-world bus services and actual passenger demand derived from smart card data. Our results show that the proposed model outperforms both traditional headway-based control methods and existing MARL methods.
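The asynchronous decision setting described above can be illustrated with a minimal event-driven loop, where each bus (agent) acts only upon reaching a stop rather than at synchronized time steps. This is an illustrative sketch, not the authors' implementation: the `policy` callable, the fixed inter-stop travel time, and the holding-time action are all hypothetical simplifications.

```python
import heapq

def run_async_control(arrival_times, policy, horizon):
    """Event-driven control loop: each bus triggers a decision only when it
    arrives at a stop, so agents act asynchronously, never simultaneously."""
    # Priority queue of (next_arrival_time, bus_id) events.
    events = [(t, b) for b, t in enumerate(arrival_times)]
    heapq.heapify(events)
    log = []
    while events:
        t, bus = heapq.heappop(events)
        if t > horizon:
            break
        # The agent's action here is a holding time at the stop (hypothetical).
        hold = policy(bus, t)
        log.append((t, bus, hold))
        travel = 5.0  # hypothetical fixed travel time to the next stop
        heapq.heappush(events, (t + hold + travel, bus))
    return log

# Usage: two buses starting 2 time units apart, with a constant holding policy.
log = run_async_control([0.0, 2.0], lambda bus, t: 1.0, horizon=20.0)
```

Because events are processed in time order from a heap, decisions interleave across buses at irregular instants, which is exactly the asynchrony that prevents a standard simultaneous-action MARL formulation from applying directly.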
Keywords:
Agent-based and Multi-agent Systems: Multi-agent Learning
Multidisciplinary Topics and Applications: Transportation
Machine Learning Applications: Applications of Reinforcement Learning