The Expected-Length Model of Options

David Abel, John Winder, Marie desJardins, Michael Littman

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 1951-1958. https://doi.org/10.24963/ijcai.2019/270

Effective options can make reinforcement learning easier by enhancing an agent's ability to both explore in a targeted manner and plan further into the future. However, learning an appropriate model of an option's dynamics is hard, requiring estimation of a highly parameterized probability distribution. This paper introduces and motivates the Expected-Length Model (ELM) for options, an alternative model of transition dynamics. We prove ELM is a biased estimator of the traditional Multi-Time Model (MTM), but provide a non-vacuous bound on their deviation. We further prove that, in stochastic shortest path problems, ELM induces a value function that is sufficiently similar to the one induced by MTM, and is thus capable of supporting near-optimal behavior. We explore the practical utility of this option model experimentally, finding consistent support for the thesis that ELM is a suitable replacement for MTM. In some cases, we find ELM leads to more sample-efficient learning, especially when options are arranged in a hierarchy.
Keywords:
Machine Learning: Reinforcement Learning
Planning and Scheduling: Hierarchical planning
Planning and Scheduling: Markov Decision Processes
Planning and Scheduling: Model-Based Reasoning