Towards Applicable Reinforcement Learning: Improving the Generalization and Sample Efficiency with Policy Ensemble

Zhengyu Yang, Kan Ren, Xufang Luo, Minghuan Liu, Weiqing Liu, Jiang Bian, Weinan Zhang, Dongsheng Li

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3659-3665. https://doi.org/10.24963/ijcai.2022/508

It is challenging for reinforcement learning (RL) algorithms to succeed in real-world applications. Take financial trading as an example: the market information is noisy and imperfect, and macroeconomic regulation or other factors may shift between training and evaluation, so resolving the task requires both generalization and high sample efficiency. Directly applying typical RL algorithms can lead to poor performance in such scenarios. To derive a robust and applicable RL algorithm, in this work we design a simple yet effective method named Ensemble Proximal Policy Optimization (EPPO), which learns ensemble policies in an end-to-end manner. Notably, EPPO combines each individual policy and the policy ensemble organically and optimizes both simultaneously. In addition, EPPO adopts a diversity-enhancement regularization over the policy space, which helps the agent generalize to unseen states and promotes exploration. We theoretically prove that EPPO increases exploration efficacy, and through comprehensive experimental evaluations on various tasks, we demonstrate that EPPO achieves higher efficiency and is more robust for real-world applications than vanilla policy optimization algorithms and other ensemble methods. Code and supplemental materials are available at https://seqml.github.io/eppo.
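To make the abstract's high-level description concrete, the following is a minimal sketch (not the authors' released implementation; see the linked code for the actual method) of a PPO-style loss that jointly optimizes an ensemble policy and its individual member policies, plus a diversity regularizer over the policy space. The uniform-mixture ensemble, the pairwise-KL form of the diversity term, and all names (EnsemblePolicy, diversity_penalty, eppo_style_loss, beta) are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: ensemble of policy heads trained end-to-end, with (a) a PPO clipped
# surrogate on the ensemble policy and on each head, and (b) a diversity bonus that
# rewards disagreement between heads. The exact objective in EPPO may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnsemblePolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, num_heads: int = 4, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        # One policy head per ensemble member, sharing the encoder.
        self.heads = nn.ModuleList([nn.Linear(hidden, act_dim) for _ in range(num_heads)])

    def head_probs(self, obs: torch.Tensor) -> torch.Tensor:
        # Action distributions of all heads, shape (num_heads, batch, act_dim).
        h = self.encoder(obs)
        return torch.stack([F.softmax(head(h), dim=-1) for head in self.heads])

    def ensemble_probs(self, obs: torch.Tensor) -> torch.Tensor:
        # Ensemble policy taken here as a uniform mixture of the heads (an assumption).
        return self.head_probs(obs).mean(dim=0)


def diversity_penalty(head_probs: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Negative mean pairwise KL between heads: adding this to the loss pushes the
    # heads apart, i.e. it acts as a diversity-enhancement regularizer.
    num_heads = head_probs.shape[0]
    total = torch.zeros(())
    for i in range(num_heads):
        for j in range(num_heads):
            if i != j:
                kl = (head_probs[i] * (torch.log(head_probs[i] + eps)
                                       - torch.log(head_probs[j] + eps))).sum(-1)
                total = total - kl.mean()
    return total / (num_heads * (num_heads - 1))


def eppo_style_loss(model, obs, actions, old_logp, adv, clip=0.2, beta=0.01):
    """PPO clipped surrogate on both the ensemble policy and each individual head,
    plus the diversity term, mirroring the abstract's 'optimize both simultaneously'."""
    def clipped_surrogate(probs):
        logp = torch.log(probs.gather(1, actions.unsqueeze(-1)).squeeze(-1) + 1e-8)
        ratio = torch.exp(logp - old_logp)
        return torch.min(ratio * adv,
                         torch.clamp(ratio, 1 - clip, 1 + clip) * adv).mean()

    head_probs = model.head_probs(obs)
    ensemble_term = clipped_surrogate(head_probs.mean(dim=0))
    head_term = torch.stack([clipped_surrogate(p) for p in head_probs]).mean()
    return -(ensemble_term + head_term) + beta * diversity_penalty(head_probs)
```

In this sketch, beta trades off the policy-improvement objective against head diversity; the value-function loss and advantage estimation are standard PPO components and are omitted for brevity.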
Keywords:
Machine Learning: Deep Reinforcement Learning
Machine Learning: Ensemble Methods