Maximum Entropy Softmax Policy Gradient via Entropy Advantage Estimation

Jean Seong Bjorn Choe, Jong-kook Kim

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4958-4966. https://doi.org/10.24963/ijcai.2025/552

Entropy regularisation is a widely adopted technique that enhances the performance and stability of policy optimisation. Maximum entropy reinforcement learning (MaxEnt RL) regularises policy evaluation by augmenting the objective with an entropy term, and offers theoretical benefits for policy optimisation. However, its practical application in straightforward direct policy gradient settings remains surprisingly underexplored. We hypothesise that this is due to the difficulty of managing the entropy reward in practice. This paper proposes Entropy Advantage Policy Optimisation (EAPO), a simple method that facilitates MaxEnt RL implementation by estimating the task and entropy objectives separately. Our empirical evaluations demonstrate that extending Proximal Policy Optimisation (PPO) and Trust Region Policy Optimisation (TRPO) within the MaxEnt framework improves optimisation performance, generalisation, and exploration across a variety of environments. Moreover, our method provides a stable and performant MaxEnt RL algorithm for discrete action spaces.
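
The central idea of estimating the task and entropy objectives separately can be pictured with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: it runs two independent generalised advantage estimates, one over environment rewards and one over per-step policy entropy treated as a reward, and combines them with a temperature coefficient before they would feed a PPO/TRPO-style surrogate objective. All variable names, the `tau` coefficient, and the numbers are hypothetical.

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalised Advantage Estimation over one rollout (arrays of shape [T])."""
    adv = np.zeros_like(rewards)
    next_value, running = last_value, 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
        next_value = values[t]
    return adv

# Toy rollout of length 4 (all numbers illustrative).
task_rewards    = np.array([1.0, 0.0, 0.5, 1.0])  # environment rewards r_t
entropy_rewards = np.array([1.2, 1.1, 0.9, 0.8])  # per-step policy entropy H(pi(.|s_t))
task_values     = np.array([0.9, 0.8, 0.7, 0.6])  # predictions from a task critic head
entropy_values  = np.array([1.0, 1.0, 0.9, 0.9])  # predictions from an entropy critic head
tau = 0.05                                         # entropy temperature (hypothetical value)

# Two independent advantage estimates, one per objective.
task_adv = gae(task_rewards, task_values, last_value=0.0)
ent_adv  = gae(entropy_rewards, entropy_values, last_value=0.0)

# Combined signal that would drive the PPO/TRPO surrogate objective.
combined_adv = task_adv + tau * ent_adv
print(combined_adv)
```

Keeping the two estimators separate avoids folding the entropy bonus directly into the environment reward, which is one plausible reading of why managing the entropy reward is otherwise difficult in practice.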
Keywords:
Machine Learning: ML: Reinforcement learning
Machine Learning: ML: Optimization
Uncertainty in AI: UAI: Decision and utility theory