Influence of State-Variable Constraints on Partially Observable Monte Carlo Planning

Alberto Castellini, Georgios Chalkiadakis, Alessandro Farinelli

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5540-5546. https://doi.org/10.24963/ijcai.2019/769

Online planning methods for partially observable Markov decision processes (POMDPs) have recently gained much interest. In this paper, we propose introducing prior knowledge, in the form of (probabilistic) relationships among discrete state variables, into online planning based on the well-known POMCP algorithm. In particular, we propose the use of hard constraint networks and probabilistic Markov random fields to formalize state-variable constraints, and we extend the POMCP algorithm to take advantage of these constraints. Results on a case study based on Rocksample show that using this knowledge significantly improves the performance of the algorithm. The extent of this improvement depends on the amount of knowledge encoded in the constraints and reaches 50% of the average discounted return in the most favorable cases that we analyzed.
Keywords:
Planning and Scheduling: POMDPs
Planning and Scheduling: Robot Planning
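
The abstract describes exploiting hard constraints among discrete state variables (e.g., rock values in Rocksample) within a POMCP-style planner. The sketch below is purely illustrative and not the authors' implementation: it assumes binary rock-value variables and shows one simple way hard constraints could be used to rejection-sample a particle belief so that every particle respects the prior knowledge. All function and variable names are hypothetical.

```python
# Illustrative sketch only (not the paper's code): filtering sampled particles
# with hard constraints over binary state variables, as in a Rocksample-like task.
import random


def satisfies_constraints(state, constraints):
    """Return True if the assignment `state` (tuple of 0/1 rock values)
    satisfies every hard constraint; each constraint maps a state to a bool."""
    return all(c(state) for c in constraints)


def sample_constrained_particles(num_particles, num_rocks, constraints, rng=random):
    """Rejection-sample an initial particle set so every particle is consistent
    with the prior knowledge encoded by the hard constraints."""
    particles = []
    while len(particles) < num_particles:
        candidate = tuple(rng.randint(0, 1) for _ in range(num_rocks))
        if satisfies_constraints(candidate, constraints):
            particles.append(candidate)
    return particles


if __name__ == "__main__":
    # Hypothetical constraints: rocks 0 and 1 share the same value,
    # and at least one rock is valuable.
    constraints = [
        lambda s: s[0] == s[1],
        lambda s: any(v == 1 for v in s),
    ]
    belief = sample_constrained_particles(100, num_rocks=4, constraints=constraints)
    print(belief[:5])
```

A probabilistic variant, following the Markov random field formulation mentioned in the abstract, would instead weight (or resample) particles by the unnormalized product of clique potentials rather than rejecting violating states outright.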