Experience Replay Optimization

Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, Xia Hu

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4243-4249. https://doi.org/10.24963/ijcai.2019/589

Experience replay enables reinforcement learning agents to memorize and reuse past experiences, just as humans recall relevant memories for the situation at hand. Contemporary off-policy algorithms either replay past experiences uniformly or rely on a rule-based replay strategy, which may be sub-optimal. In this work, we consider learning a replay policy to optimize the cumulative reward. Replay learning is challenging because the replay memory is noisy and large, and the cumulative reward is unstable. To address these issues, we propose a novel experience replay optimization (ERO) framework which alternately updates two policies: the agent policy and the replay policy. The agent is updated to maximize the cumulative reward based on the replayed data, while the replay policy is updated to provide the agent with the most useful experiences. Experiments on various continuous control tasks demonstrate the effectiveness of ERO, empirically showing the promise of learned experience replay for improving the performance of off-policy reinforcement learning algorithms.
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Deep Learning
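
The abstract describes an alternating scheme: the agent learns from transitions chosen by a replay policy, and the replay policy is in turn rewarded by the agent's improvement. Below is a minimal NumPy sketch of that idea only; the transition features, the Bernoulli selection rule, the REINFORCE-style replay update, and the placeholder agent/evaluation routines are illustrative assumptions rather than the paper's exact formulation (see the paper at the DOI above for the precise method).

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 3  # per-transition features seen by the replay policy (assumed)


def transition_features(t):
    """Hypothetical features of a stored transition: reward, a TD-error proxy, recency."""
    return np.array([t["reward"], t["td_proxy"], 1.0 / (1.0 + t["age"])])


class ReplayPolicy:
    """Keeps each transition for replay with probability sigmoid(w . phi(transition))."""

    def __init__(self, lr=0.05):
        self.w = np.zeros(FEAT_DIM)
        self.lr = lr

    def select(self, buffer):
        phi = np.stack([transition_features(t) for t in buffer])
        keep_prob = 1.0 / (1.0 + np.exp(-phi @ self.w))
        mask = rng.random(len(buffer)) < keep_prob      # Bernoulli selection
        return mask, keep_prob, phi

    def update(self, mask, keep_prob, phi, replay_reward):
        # REINFORCE-style update: gradient of the log-probability of the sampled
        # mask, scaled by the replay reward (the agent's recent improvement).
        grad = ((mask.astype(float) - keep_prob)[:, None] * phi).mean(axis=0)
        self.w += self.lr * replay_reward * grad


def agent_update(agent_params, replayed):
    """Placeholder for an off-policy update (e.g. a DDPG/TD3 step) on replayed data."""
    if replayed:
        agent_params["value"] += 0.1 * np.mean([t["reward"] for t in replayed])
    return agent_params


def evaluate(agent_params):
    """Placeholder for measuring the agent's cumulative reward on the task."""
    return agent_params["value"] + 0.01 * rng.standard_normal()


# Alternating training loop: agent policy and replay policy are updated in turn.
buffer = [{"reward": rng.standard_normal(), "td_proxy": abs(rng.standard_normal()),
           "age": i} for i in range(256)]
replay_policy = ReplayPolicy()
agent_params = {"value": 0.0}
prev_return = evaluate(agent_params)

for step in range(100):
    mask, keep_prob, phi = replay_policy.select(buffer)        # replay policy picks data
    replayed = [t for t, m in zip(buffer, mask) if m]
    agent_params = agent_update(agent_params, replayed)        # agent learns from it
    cur_return = evaluate(agent_params)
    replay_reward = cur_return - prev_return                   # improvement as feedback
    replay_policy.update(mask, keep_prob, phi, replay_reward)  # replay policy learns too
    prev_return = cur_return
```

In this toy setting the replay reward is simply the change in evaluated return between consecutive updates, which is one common way to read "provide the agent with the most useful experiences"; in practice the buffer, agent, and evaluation would come from a full off-policy RL pipeline.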