Factorized Asymptotic Bayesian Policy Search for POMDPs

Masaaki Imaizumi, Ryohei Fujimaki

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 4346-4352. https://doi.org/10.24963/ijcai.2017/607

This paper proposes a novel direct policy search (DPS) method with model selection for partially observable Markov decision processes (POMDPs). DPS methods have become standard for learning POMDPs due to their computational efficiency and natural ability to maximize total rewards. An important open challenge for the best use of DPS methods is model selection, i.e., determining the proper dimensionality of hidden states and the complexity of policy functions, to mitigate overfitting in highly flexible model representations of POMDPs. This paper bridges Bayesian inference and reward maximization and derives a marginalized weighted log-likelihood (MWL) for POMDPs that combines the advantages of Bayesian model selection and DPS. We then propose factorized asymptotic Bayesian policy search (FABPS), which explores the model and the policy that maximize MWL by extending recently developed factorized asymptotic Bayesian inference. Experimental results show that FABPS outperforms state-of-the-art model selection methods for POMDPs, with respect to both model selection and expected total rewards.
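As a rough intuition for the coupling of reward maximization with likelihood-based inference described above, the sketch below runs a REINFORCE-style direct policy search on a toy two-state POMDP, ascending a reward-weighted log-likelihood surrogate. This is not the paper's FABPS algorithm or its marginalized MWL objective; the environment, the memoryless softmax policy class, and the learning rate are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy POMDP (illustrative): 2 hidden states, 2 observations, 2 actions.
    T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P(s' | s, a) for a = 0
                  [[0.5, 0.5], [0.6, 0.4]]])  # P(s' | s, a) for a = 1
    O = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(o | s)
    R = np.array([[1.0, 0.0], [0.0, 1.0]])    # reward(s, a)

    def rollout(theta, horizon=20):
        """Sample one episode under a memoryless softmax policy pi(a | o; theta).

        Returns the total reward and the accumulated gradient of the
        log-likelihood of the sampled actions.
        """
        s, total, grads = 0, 0.0, np.zeros_like(theta)
        for _ in range(horizon):
            o = rng.choice(2, p=O[s])
            logits = theta[o]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            a = rng.choice(2, p=p)
            # Gradient of log pi(a | o; theta) for a softmax policy.
            g = -p
            g[a] += 1.0
            grads[o] += g
            total += R[s, a]
            s = rng.choice(2, p=T[a, s])
        return total, grads

    theta = np.zeros((2, 2))  # policy parameters: action logits per observation
    for step in range(2000):
        # Reward-weighted log-likelihood ascent: the log-likelihood of the
        # sampled actions is weighted by the episode's return, so likelihood
        # maximization and reward maximization point in the same direction.
        ret, grad_logp = rollout(theta)
        theta += 0.01 * ret * grad_logp

    probs = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)
    print("learned policy pi(a | o):", np.round(probs, 2))

Weighting the action log-likelihood by the return is the classical link between DPS and inference: high-reward trajectories are treated as if they were more likely data, so likelihood machinery (and, in the paper, Bayesian marginalization over the hidden-state model) can be brought to bear on policy learning.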
Keywords:
Planning and Scheduling: POMDPs
Agent-based and Multi-agent Systems: Agent-Based Simulation and Emergence