Abstract

Proceedings Abstracts of the Twenty-Third International Joint Conference on Artificial Intelligence

Online Expectation Maximization for Reinforcement Learning in POMDPs / 1501
Miao Liu, Xuejun Liao, Lawrence Carin

We present online nested expectation maximization for model-free reinforcement learning in a POMDP. The algorithm evaluates the policy using only the current learning episode, discarding the episode after evaluation and retaining only a sufficient statistic, from which the policy is computed in closed form. As a result, the online algorithm has a time complexity of O(n) and a memory complexity of O(1), compared to O(n²) and O(n) for the corresponding batch-mode algorithm, where n is the number of learning episodes. The online algorithm, which has provable convergence, is demonstrated on five benchmark POMDP problems.
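The abstract gives no algorithmic details, but the general pattern it describes (fold each episode's E-step statistics into a fixed-size running sufficient statistic, discard the episode, and recompute the policy in closed form from that statistic) can be sketched as below. This is a hypothetical illustration of online EM with a stochastic-approximation update, not the authors' algorithm; the E-step is faked with random counts so the sketch runs, and the dimensions and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_obs, num_actions = 4, 3   # illustrative sizes, not from the paper
num_episodes = 1000

# Running sufficient statistic: expected (observation, action) counts.
# Its size does not grow with the number of episodes, hence O(1) memory.
stats = np.ones((num_obs, num_actions))

for k in range(num_episodes):
    # Stand-in for the E-step on the current episode only. In a real
    # implementation these would be expected counts computed from the
    # episode under the current policy; here they are random placeholders.
    episode_stats = rng.random((num_obs, num_actions))

    # Online (stochastic-approximation) EM update; the episode itself is
    # then discarded, so total time stays O(n) over n episodes.
    eta = 1.0 / (k + 2)
    stats = (1.0 - eta) * stats + eta * episode_stats

    # Closed-form M-step: normalize the counts into a stochastic policy.
    policy = stats / stats.sum(axis=1, keepdims=True)

print(policy)
```

The key design point the sketch illustrates is that only `stats` persists across episodes, so memory use is independent of how many episodes have been processed.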