Abstract

Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Bayesian Reinforcement Learning with Behavioral Feedback
Teakgyu Hong, Jongmin Lee, Kee-Eung Kim, Pedro A. Ortega, Daniel Lee

In the standard reinforcement learning setting, the agent learns an optimal policy solely from the state transitions and rewards provided by the environment. We consider an extended setting in which a trainer additionally provides feedback on the actions executed by the agent. The agent must incorporate this feedback appropriately, even when the feedback is not necessarily accurate. In this paper, we present a Bayesian approach to this extended reinforcement learning setting. Specifically, we extend Kalman Temporal Difference learning to compute the posterior distribution over Q-values given the state transitions and rewards from the environment as well as the feedback from the trainer. Through experiments on standard reinforcement learning tasks, we show that learning performance can be significantly improved even with inaccurate feedback.
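
The abstract does not spell out the observation models, but the core idea of Kalman Temporal Difference learning, a Gaussian posterior over Q-function weights updated by a Kalman filter, can be sketched briefly. The following Python sketch is illustrative only: the class name KTDWithFeedback, the linear feature representation, and in particular the feedback model (treating a +1/-1 trainer signal as a noisy linear observation of the executed action's advantage) are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

class KTDWithFeedback:
    """Illustrative sketch of Kalman TD learning with trainer feedback.

    Maintains a Gaussian posterior over the weights theta of a linear
    Q-function Q(s, a) = phi(s, a) @ theta. Both environment rewards and
    trainer feedback are folded in as scalar linear observations. The
    feedback model here is an assumption, not the paper's exact model.
    """

    def __init__(self, n_features, gamma=0.99,
                 process_var=1e-4, reward_var=1.0, feedback_var=4.0):
        self.gamma = gamma
        self.theta = np.zeros(n_features)              # posterior mean over weights
        self.P = np.eye(n_features)                    # posterior covariance
        self.Qproc = process_var * np.eye(n_features)  # random-walk process noise
        self.reward_var = reward_var                   # reward observation noise
        self.feedback_var = feedback_var               # trainer noise (inaccuracy)

    def _kalman_update(self, h, y, obs_var):
        """Condition the posterior on one scalar linear observation
        y = h @ theta + noise, with noise ~ N(0, obs_var)."""
        self.P = self.P + self.Qproc                   # predict step (random walk)
        s = h @ self.P @ h + obs_var                   # innovation variance
        k = self.P @ h / s                             # Kalman gain
        self.theta = self.theta + k * (y - h @ self.theta)
        self.P = self.P - np.outer(k, k) * s           # posterior covariance

    def update_transition(self, phi_sa, phi_next_sa, reward):
        """TD observation: r ~ (phi(s,a) - gamma * phi(s',a')) @ theta."""
        h = phi_sa - self.gamma * phi_next_sa
        self._kalman_update(h, reward, self.reward_var)

    def update_feedback(self, phi_sa, phi_all_actions, feedback):
        """Trainer feedback in {+1, -1} on the executed action, treated
        (by assumption) as a noisy observation of that action's
        advantage over the mean action in the same state."""
        h = phi_sa - phi_all_actions.mean(axis=0)      # advantage direction
        self._kalman_update(h, float(feedback), self.feedback_var)

    def q_value(self, phi_sa):
        """Posterior mean and variance of Q(s, a)."""
        return phi_sa @ self.theta, phi_sa @ self.P @ phi_sa
```

In this sketch, feedback_var encodes how much the trainer is trusted: a larger value down-weights inaccurate feedback, which is consistent with the abstract's claim that learning can improve even when feedback is noisy. The posterior covariance P could also drive exploration, e.g. via Thompson sampling over Q-values, though the abstract does not state how the paper uses it.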
