Semi-Markov Reinforcement Learning for Stochastic Resource Collection

Sebastian Schmoll, Matthias Schubert

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3349-3355. https://doi.org/10.24963/ijcai.2020/463

We show that the task of collecting stochastic, spatially distributed resources (Stochastic Resource Collection, SRC) can be modeled as a Semi-Markov Decision Process. Our Deep Q-Network (DQN) based approach uses a novel scalable and transferable artificial neural network architecture. The concrete use case of SRC is a parking officer (single agent) trying to maximize the number of fined parking violations in their area. We evaluate our approach on an environment based on real-world parking data from the city of Melbourne. In small and hence simple settings, with short distances between resources and few simultaneous violations, our approach performs comparably to previous work. As the size of the network (and hence the number of resources) grows, our solution significantly outperforms preceding methods. Moreover, an agent trained in one area and applied to a non-overlapping new area still outperforms existing approaches.
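The abstract does not spell out how the semi-Markov structure enters the learning update, so the following is a minimal sketch of the generic SMDP-style Q-learning target, not the authors' published architecture. All names (smdp_dqn_target, durations, next_q_max) are illustrative assumptions. The key difference from a standard MDP target is that the discount factor is raised to the power of the transition duration tau, e.g. the travel time between parking resources:

    import torch

    def smdp_dqn_target(rewards, durations, next_q_max, gamma=0.99, terminal=None):
        # SMDP-style bootstrap target: the discount gamma is raised to the
        # (possibly fractional) duration tau of each transition, so actions
        # with long travel times are discounted more heavily.
        #   rewards:    reward accumulated over each transition
        #   durations:  elapsed time tau per action (e.g., travel time)
        #   next_q_max: max_a' Q_target(s', a') for the successor state
        #   terminal:   optional boolean mask; no bootstrapping past episode end
        discount = gamma ** durations
        if terminal is not None:
            discount = discount * (~terminal)
        return rewards + discount * next_q_max

With all durations fixed to 1 this reduces to the usual DQN target r + gamma * max_a' Q(s', a'), which is one way to see that the MDP case is a special case of the semi-Markov formulation.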
Keywords:
Machine Learning Applications: Applications of Reinforcement Learning
Multidisciplinary Topics and Applications: Transportation
Machine Learning: Deep Reinforcement Learning
Planning and Scheduling: Markov Decision Processes