Reinforcement Learning with Dynamic Boltzmann Softmax Updates
Ling Pan, Qingpeng Cai, Qi Meng, Wei Chen, Longbo Huang
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 1992-1998.
https://doi.org/10.24963/ijcai.2020/276
Value function estimation, i.e., prediction, is an important task in reinforcement learning. The Boltzmann softmax operator is a natural value estimator and can provide several benefits. However, it does not satisfy the non-expansion property, and its direct use may fail to converge even in value iteration. In this paper, we propose to update the value function with the dynamic Boltzmann softmax (DBS) operator, which has good convergence properties in both the planning and learning settings. Experimental results on GridWorld show that the DBS operator enables better estimation of the value function and rectifies the convergence issue of the softmax operator. Finally, we apply the DBS operator to obtain the DBS-DQN algorithm, which outperforms DQN substantially in 40 out of 49 Atari games.
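As a rough illustration of the backup the abstract describes, below is a minimal NumPy sketch of value iteration with a dynamic Boltzmann softmax operator, where boltz_beta(Q(s,.)) = sum_a exp(beta Q(s,a)) Q(s,a) / sum_a' exp(beta Q(s,a')). The toy MDP interface (transition tensor P, reward matrix R) and the quadratic schedule beta_t = t^2 are illustrative assumptions of this sketch, not specifics from the paper; the key idea is that beta_t increases over iterations, so the operator approaches the max backup of standard value iteration.

import numpy as np

def boltzmann_softmax(q, beta):
    # boltz_beta(q) = sum_a softmax(beta * q)_a * q_a;
    # approaches max(q) as beta -> infinity.
    w = np.exp(beta * (q - q.max()))  # subtract max for numerical stability
    return np.dot(w / w.sum(), q)

def dbs_value_iteration(P, R, gamma=0.99, n_iters=500):
    # P: transition probabilities, shape (S, A, S); R: rewards, shape (S, A).
    # beta_t = t**2 is one illustrative schedule with beta_t -> infinity
    # (a hypothetical choice for this sketch).
    S, A, _ = P.shape
    V = np.zeros(S)
    for t in range(1, n_iters + 1):
        beta_t = float(t) ** 2
        Q = R + gamma * (P @ V)  # one-step lookahead, shape (S, A)
        V = np.array([boltzmann_softmax(Q[s], beta_t) for s in range(S)])
    return V

With a fixed, finite beta this backup is a weighted average rather than a max and need not converge to the optimal value function; letting beta_t grow over time is what recovers convergence while retaining the smoothing benefits of the softmax early in training.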
Keywords:
Machine Learning: Deep Reinforcement Learning
Machine Learning: Reinforcement Learning