Constrained Policy Improvement for Efficient Reinforcement Learning

Elad Sarafian, Aviv Tamar, Sarit Kraus

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2863-2871. https://doi.org/10.24963/ijcai.2020/396

We propose a policy improvement algorithm for Reinforcement Learning (RL) termed Rerouted Behavior Improvement (RBI). RBI is designed to take into account the evaluation errors of the Q-function. Such errors are common in RL when the Q-value is learned from finite experience data. Greedy policies, or even constrained policy optimization algorithms that ignore these errors, may suffer from an improvement penalty (i.e., a policy impairment). To reduce this penalty, RBI attenuates rapid policy changes toward actions that were rarely sampled. This approach is shown to avoid catastrophic performance degradation and to reduce regret when learning from a batch of transition samples. Through a two-armed bandit example, we show that it also increases data efficiency when the optimal action has a high variance. We evaluate RBI on two tasks in the Arcade Learning Environment: (1) learning from observations of multiple behavior policies and (2) iterative RL. Our results demonstrate the advantage of RBI over greedy policies and other constrained policy optimization algorithms, both in learning from observations and in RL tasks.
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Deep Reinforcement Learning
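
To make the core idea concrete, the following is a minimal, illustrative Python sketch of a count-aware constrained improvement step: probability mass is moved toward actions with high estimated Q-values, but each action's new probability is capped by a ratio over its behavior probability that relaxes only as its sample count grows. This is not the RBI update from the paper; the cap rule, the base_ratio and smoothing hyperparameters, and the greedy mass-filling scheme are assumptions made purely for illustration.

```python
import numpy as np


def constrained_improvement(q_est, behavior_probs, counts,
                            base_ratio=2.0, smoothing=10.0):
    """Greedy improvement with per-action probability caps (illustrative sketch).

    q_est          : Q-value estimates learned from finite data (one per action).
    behavior_probs : probabilities of the behavior policy that generated the data.
    counts         : number of times each action was sampled.
    base_ratio     : max allowed ratio new_prob / behavior_prob for a
                     well-sampled action (hypothetical hyperparameter).
    smoothing      : pseudo-count controlling how quickly the cap relaxes
                     (hypothetical hyperparameter).
    """
    q_est = np.asarray(q_est, dtype=float)
    behavior_probs = np.asarray(behavior_probs, dtype=float)
    counts = np.asarray(counts, dtype=float)

    # Per-action cap on the improved probability: rarely sampled actions keep a
    # cap close to their behavior probability, so noisy Q-estimates cannot pull
    # the whole policy toward them in a single improvement step.
    caps = behavior_probs * (1.0 + (base_ratio - 1.0) * counts / (counts + smoothing))

    # Fill probability mass greedily, best estimated action first, up to each cap.
    new_probs = np.zeros_like(behavior_probs)
    remaining = 1.0
    for a in np.argsort(-q_est):
        p = min(caps[a], remaining)
        new_probs[a] = p
        remaining -= p
        if remaining <= 1e-12:
            break

    # Numerical safeguard: spread any leftover mass like the behavior policy.
    if remaining > 1e-12:
        new_probs += remaining * behavior_probs
    return new_probs


if __name__ == "__main__":
    q_hat = [1.0, 0.2, 1.5]      # action 2 looks best from the estimates...
    behavior = [0.5, 0.4, 0.1]   # ...but the behavior policy took it rarely,
    counts = [50, 40, 10]        # so only part of the mass is rerouted to it.
    print(constrained_improvement(q_hat, behavior, counts))
```

In this toy run the improved policy shifts probability toward the action with the highest estimated value, but the shift is bounded for the rarely sampled action, which is the behavior the abstract describes: avoiding large policy changes toward actions whose value estimates are least reliable.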