LTL and Beyond: Formal Languages for Reward Function Specification in Reinforcement Learning

Alberto Camacho, Rodrigo Toro Icarte, Toryn Q. Klassen, Richard Valenzano, Sheila A. McIlraith

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), Special Track on Understanding Intelligence and Human-level AI in the New Machine Learning Era. Pages 6065-6073. https://doi.org/10.24963/ijcai.2019/840

In Reinforcement Learning (RL), an agent is guided by the rewards it receives from the reward function. Unfortunately, it may take many interactions with the environment to learn from sparse rewards, and it can be challenging to specify reward functions that reflect complex reward-worthy behavior. We propose using reward machines (RMs), which are automata-based representations that expose reward function structure, as a normal form representation for reward functions. We show how specifications of reward in various formal languages, including Linear Temporal Logic (LTL) and other regular languages, can be automatically translated into RMs, easing the burden of complex reward function specification. We then show how the exposed structure of the reward function can be exploited by tailored Q-learning algorithms and automated reward shaping techniques in order to improve the sample efficiency of reinforcement learning methods. Experiments show that these RM-tailored techniques significantly outperform state-of-the-art (deep) RL algorithms, solving problems that otherwise cannot reasonably be solved by existing approaches.
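To make the idea of an RM concrete, the sketch below shows a minimal, hypothetical encoding of a reward machine as a finite-state transducer: states track progress on the task, and transitions (triggered by high-level events observed in the environment) emit rewards. The two-step task "observe event a, then event b" and all identifiers are illustrative assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch of a reward machine (RM).
# Assumed example task: reward 1 is given only after event "a" occurs
# and then event "b" occurs (all names here are hypothetical).

class RewardMachine:
    def __init__(self, initial_state, transitions, terminal_states):
        # transitions: dict mapping (rm_state, event) -> (next_rm_state, reward)
        self.initial_state = initial_state
        self.transitions = transitions
        self.terminal_states = terminal_states

    def step(self, rm_state, event):
        """Advance the RM on an observed event; return (next RM state, reward)."""
        return self.transitions.get((rm_state, event), (rm_state, 0.0))

    def is_terminal(self, rm_state):
        return rm_state in self.terminal_states


rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "a"): ("u1", 0.0),     # first subgoal reached, no reward yet
        ("u1", "b"): ("u_acc", 1.0),  # task completed, reward delivered
    },
    terminal_states={"u_acc"},
)
```

Because the RM makes task progress explicit as a small automaton state, an RL agent can condition its policy (or keep a separate Q-function) on each RM state, which is the kind of structure the tailored learning and reward-shaping methods in the paper exploit.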
Keywords: Knowledge representations for Learning (Special Track on Understanding Intelligence and Human-level AI in the New Machine Learning Era)