On the (In)Tractability of Reinforcement Learning for LTL Objectives

Cambridge Yang, Michael L. Littman, Michael Carbin

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3650-3658. https://doi.org/10.24963/ijcai.2022/507

In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives. Despite these advancements, there are fundamental limitations to how well this problem can be solved. Previous studies have alluded to this fact but have not examined it in depth. In this paper, we address the tractability of reinforcement learning for general LTL objectives from a theoretical perspective. We formalize the problem under the probably approximately correct learning in Markov decision processes (PAC-MDP) framework, a standard framework for measuring sample complexity in reinforcement learning. In this formalization, we prove that the optimal policy for any LTL formula is PAC-MDP-learnable if and only if the formula is in the most limited class in the LTL hierarchy, consisting of formulas that are decidable within a finite horizon. Practically, our result implies that, for LTL objectives that are not decidable within a finite horizon, it is impossible for a reinforcement-learning algorithm to obtain a PAC-MDP guarantee on the performance of its learned policy after finitely many interactions with an unconstrained environment.
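
As an illustration of the dichotomy (these example formulas are ours, not drawn from the abstract, with goal and safe as atomic propositions):

    phi_1 = goal ∨ X goal ∨ X X goal   (satisfaction is settled by every length-3 prefix of a trajectory, so the objective is decidable within a finite horizon and PAC-MDP-learnable)
    phi_2 = F goal,  phi_3 = G safe    (no finite prefix settles satisfaction, so no PAC-MDP guarantee is possible)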
Keywords: Machine Learning: Reinforcement Learning