Penalized Proximal Policy Optimization for Safe Reinforcement Learning

Linrui Zhang, Li Shen, Long Yang, Shixiang Chen, Xueqian Wang, Bo Yuan, Dacheng Tao

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3744-3750. https://doi.org/10.24963/ijcai.2022/520

Safe reinforcement learning aims to learn the optimal policy while satisfying safety constraints, which is essential in real-world applications. However, current algorithms still struggle to achieve efficient policy updates while strictly satisfying hard constraints. In this paper, we propose Penalized Proximal Policy Optimization (P3O), which solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem. Specifically, P3O utilizes a simple yet effective penalty approach to eliminate the cost constraints and replaces the trust-region constraint with a clipped surrogate objective. We theoretically prove the exactness of the penalized method with a finite penalty factor and provide a worst-case analysis of the approximation error when the objective is evaluated on sample trajectories. Moreover, we extend P3O to the more challenging multi-constraint and multi-agent scenarios, which are less studied in previous work. Extensive experiments show that P3O outperforms state-of-the-art algorithms with respect to both reward improvement and constraint satisfaction on a set of constrained locomotion tasks.
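To make the core idea concrete, below is a minimal PyTorch sketch of a P3O-style unconstrained loss: a clipped surrogate for the reward combined with a ReLU exact-penalty term on the cost constraint, as the abstract describes. Function and argument names (`p3o_loss`, `kappa`, `cost_value_gap`) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def p3o_loss(ratio, reward_adv, cost_adv, cost_value_gap,
             kappa=1.0, clip_eps=0.2):
    """Sketch of a single unconstrained P3O-style objective.

    ratio:          pi_theta(a|s) / pi_old(a|s) for sampled actions
    reward_adv:     reward advantage estimates
    cost_adv:       cost advantage estimates
    cost_value_gap: J_C(pi_old) - d, the current constraint slack/violation
    kappa:          penalty factor (finite, per the exactness result)
    clip_eps:       PPO clipping parameter
    """
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)

    # Clipped surrogate for the reward (maximized, so negated in the loss).
    reward_surr = torch.min(ratio * reward_adv, clipped * reward_adv)

    # Clipped surrogate for the cost; taking the max is conservative,
    # since over-estimating cost tightens the constraint.
    cost_surr = torch.max(ratio * cost_adv, clipped * cost_adv)

    # ReLU exact-penalty term: zero when the constraint holds,
    # linear in the violation otherwise.
    penalty = F.relu(cost_surr.mean() + cost_value_gap)

    return -reward_surr.mean() + kappa * penalty
```

Under the paper's exactness result, a finite `kappa` suffices for the penalized minimizer to coincide with the constrained optimum, so the loss above can be minimized directly with a standard first-order optimizer rather than via a constrained update.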
Keywords:
Machine Learning: Deep Reinforcement Learning
Constraint Satisfaction and Optimization: Constraint Optimization
Agent-based and Multi-agent Systems: Multi-agent Learning