Abstract

Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Markovian State and Action Abstractions for MDPs via Hierarchical MCTS / 3029
Aijun Bai, Siddharth Srivastava, Stuart Russell

State abstraction is an important technique for scaling MDP algorithms. As is well known, however, it introduces difficulties due to the non-Markovian nature of state-abstracted models. Whereas prior approaches rely upon ad hoc fixes for this issue, we propose instead to view the state-abstracted model as a POMDP and show that we can thereby take advantage of state abstraction without sacrificing the Markov property. We further exploit the hierarchical structure introduced by state abstraction by extending the theory of options to a POMDP setting. In this context we propose a hierarchical Monte Carlo tree search algorithm and show that it converges to a recursively optimal hierarchical policy. Both theoretical and empirical results suggest that abstracting an MDP into a POMDP yields a scalable solution approach.
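To make the central idea concrete, the following minimal Python sketch (not the authors' implementation) illustrates how a state-abstracted MDP can be treated as a POMDP: the abstract state is an observation of the hidden ground state, and a particle-based Monte Carlo tree search in the spirit of POMCP plans over action-observation histories, which preserves the Markov property. The toy chain MDP, the abstraction phi, and all hyperparameters are illustrative assumptions; the hierarchical option layer described in the paper is omitted.

# Minimal sketch: state abstraction viewed as a POMDP, planned with
# history-based Monte Carlo tree search. Illustrative only; the toy MDP,
# abstraction, and constants are assumptions, not from the paper.
import math
import random
from collections import defaultdict

# Toy ground MDP: a 1-D chain of 8 states; actions move left or right.
N_STATES, ACTIONS, GAMMA = 8, (-1, +1), 0.95

def step(s, a):
    """Ground transition: the move slips 10% of the time;
    reward 1 only upon reaching the right end of the chain."""
    if random.random() < 0.1:
        a = -a
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def phi(s):
    """State abstraction: collapse pairs of ground states into blocks.
    The block index serves as the POMDP *observation*."""
    return s // 2

class HistoryMCTS:
    """MCTS over action-observation histories (POMCP-style). Conditioning
    on histories, rather than on abstract states alone, is what keeps the
    abstracted model Markovian."""
    def __init__(self, c=1.4, n_sims=2000, depth=20):
        self.c, self.n_sims, self.depth = c, n_sims, depth
        self.N = defaultdict(int)       # visits per (history, action)
        self.Nh = defaultdict(int)      # visits per history
        self.Q = defaultdict(float)     # value estimate per (history, action)

    def search(self, particles):
        """Plan from a belief represented by ground-state particles."""
        for _ in range(self.n_sims):
            self.simulate(random.choice(particles), (), 0)
        root = ()
        return max(ACTIONS, key=lambda a: self.Q[(root, a)])

    def simulate(self, s, h, d):
        if d >= self.depth:
            return 0.0
        def ucb(a):                     # UCB1 action selection at node h
            if self.N[(h, a)] == 0:
                return float("inf")
            return self.Q[(h, a)] + self.c * math.sqrt(
                math.log(self.Nh[h] + 1) / self.N[(h, a)])
        a = max(ACTIONS, key=ucb)
        s2, r = step(s, a)
        h2 = h + (a, phi(s2))           # extend history with (action, observation)
        ret = r + GAMMA * self.simulate(s2, h2, d + 1)
        self.Nh[h] += 1
        self.N[(h, a)] += 1
        self.Q[(h, a)] += (ret - self.Q[(h, a)]) / self.N[(h, a)]
        return ret

if __name__ == "__main__":
    random.seed(0)
    belief = [0] * 100                  # particles: known start at ground state 0
    print("chosen action:", HistoryMCTS().search(belief))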
