Soft Policy Gradient Method for Maximum Entropy Deep Reinforcement Learning

Wenjie Shi, Shiji Song, Cheng Wu

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3425-3431. https://doi.org/10.24963/ijcai.2019/475

Maximum entropy deep reinforcement learning (RL) methods have been demonstrated on a range of challenging continuous control tasks. However, existing methods either suffer from severe instability when training on large off-policy data or cannot scale to tasks with very high state and action dimensionality, such as 3D humanoid locomotion. Moreover, the optimality of the Boltzmann policy induced by a non-optimal soft value function lacks a convincing justification. In this paper, we first derive the soft policy gradient from an entropy-regularized expected-reward objective for RL with continuous actions. We then present an off-policy, actor-critic, model-free maximum entropy deep RL algorithm called deep soft policy gradient (DSPG), which combines the soft policy gradient with the soft Bellman equation. To ensure stable learning while eliminating the need for two separate critics for the soft value functions, we use a double sampling approach to make the soft Bellman equation tractable. Experimental results demonstrate that our method outperforms prior off-policy methods.
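The entropy-regularized objective and soft Bellman equation referenced in the abstract follow the standard maximum-entropy RL formulation; a sketch under that assumption (the temperature symbol $\alpha$ and state-action notation are conventional choices, not taken from the paper body):

```latex
% Entropy-regularized expected-reward objective: maximize reward
% plus the entropy of the policy at each visited state.
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
  \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

% Soft Bellman equation for the soft Q-function: the standard backup
% augmented with the entropy bonus -\alpha \log \pi on the next action.
Q(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E}_{s_{t+1}}
  \Big[ \mathbb{E}_{a_{t+1} \sim \pi}
  \big[ Q(s_{t+1}, a_{t+1}) - \alpha \log \pi(a_{t+1} \mid s_{t+1}) \big] \Big]
```

The nested expectation over next states and next actions is what the double sampling approach mentioned in the abstract targets, avoiding a separately learned soft state-value critic.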
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning Applications: Applications of Reinforcement Learning