A Strongly Asymptotically Optimal Agent in General Environments

Michael K. Cohen, Elliot Catt, Marcus Hutter

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2179-2186. https://doi.org/10.24963/ijcai.2019/302

Reinforcement Learning agents are expected to eventually perform well. Typically, this takes the form of a guarantee about the asymptotic behavior of an algorithm given some assumptions about the environment. We present an algorithm for a policy whose value approaches the optimal value with probability 1 in all computable probabilistic environments, provided the agent has a bounded horizon. This is known as strong asymptotic optimality, and it was previously unknown whether it was possible for a policy to be strongly asymptotically optimal in the class of all computable probabilistic environments. Our agent, Inquisitive Reinforcement Learner (Inq), is more likely to explore the more it expects an exploratory action to reduce its uncertainty about which environment it is in, hence the term inquisitive. Exploring inquisitively is a strategy that can be applied generally; for more manageable environment classes, inquisitiveness is tractable. We conducted experiments in "grid-worlds" to compare the Inquisitive Reinforcement Learner to other weakly asymptotically optimal agents.
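The exploration rule the abstract describes — exploring more when an exploratory action is expected to reduce uncertainty about which environment the agent is in — can be illustrated with a toy sketch. This is not the paper's construction of Inq; it is a minimal, assumed illustration in which "expected uncertainty reduction" is computed as the mutual information between a finite environment hypothesis class and the next observation, with all names (`info_gain`, `env_a`, etc.) hypothetical.

```python
import math

def info_gain(posterior, likelihoods):
    """Expected information gain of one action: the mutual information
    between the environment hypothesis and the next observation.
    posterior:   dict env -> current belief weight (sums to 1)
    likelihoods: dict env -> dict obs -> P(obs | action, env)
    """
    obs_set = next(iter(likelihoods.values())).keys()
    gain = 0.0
    for obs in obs_set:
        # Predictive probability of this observation under the mixture.
        p_obs = sum(posterior[e] * likelihoods[e][obs] for e in posterior)
        if p_obs == 0.0:
            continue
        for e in posterior:
            # Posterior belief in environment e after seeing obs.
            post_e = posterior[e] * likelihoods[e][obs] / p_obs
            if post_e > 0.0:
                # KL contribution, weighted by how likely obs is.
                gain += p_obs * post_e * math.log(post_e / posterior[e])
    return gain

# Two candidate environments that disagree about an exploratory action's
# outcome: trying it is informative, so an inquisitive agent should be
# more inclined to explore it than an action all hypotheses agree on.
posterior = {"env_a": 0.5, "env_b": 0.5}
informative = {"env_a": {"x": 0.9, "y": 0.1},
               "env_b": {"x": 0.1, "y": 0.9}}
uninformative = {"env_a": {"x": 0.5, "y": 0.5},
                 "env_b": {"x": 0.5, "y": 0.5}}

print(info_gain(posterior, informative))    # positive: worth exploring
print(info_gain(posterior, uninformative))  # zero: nothing to learn
```

In a sketch like this, the probability of taking an exploratory action could be made an increasing function of `info_gain`; the paper's agent does this over the class of all computable probabilistic environments, where the computation is far less tractable than in this finite example.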
Keywords:
Machine Learning: Reinforcement Learning
Planning and Scheduling: Model-Based Reasoning
Uncertainty in AI: Sequential Decision Making
Uncertainty in AI: Exact Probabilistic Inference