Analysis of Q-learning with Adaptation and Momentum Restart for Gradient Descent

Bowen Weng, Huaqing Xiong, Yingbin Liang, Wei Zhang

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3051-3057. https://doi.org/10.24963/ijcai.2020/422

Existing convergence analyses of Q-learning mostly focus on vanilla stochastic gradient descent (SGD) type updates. Although Adaptive Moment Estimation (Adam) is commonly used in practical Q-learning algorithms, no convergence guarantee has been provided for Q-learning with such updates. In this paper, we first characterize the convergence rate for Q-AMSGrad, which is the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative to Adam for theoretical analysis). To further improve the performance, we propose to incorporate a momentum restart scheme into Q-AMSGrad, resulting in the so-called Q-AMSGradR algorithm. The convergence rate of Q-AMSGradR is also established. Our experiments on a linear quadratic regulator problem demonstrate that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates. The two algorithms also exhibit significantly better performance than the DQN learning method over a batch of Atari 2600 games.
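For context, below is a minimal sketch of the generic AMSGrad update with a periodic momentum restart, the optimizer-level ingredients the abstract refers to. The variable names, the periodic restart trigger, and the function itself are illustrative assumptions, not the authors' exact Q-AMSGradR rule.

    import numpy as np

    def amsgrad_restart_step(theta, grad, state, lr=1e-3, beta1=0.9,
                             beta2=0.999, eps=1e-8, restart_period=100):
        """One AMSGrad step with a simple periodic momentum restart.

        `state` holds the first moment m, the second moment v, the
        running maximum v_hat, and a step counter t. Resetting m every
        `restart_period` steps is an illustrative restart condition.
        """
        m, v, v_hat, t = state["m"], state["v"], state["v_hat"], state["t"] + 1

        if t % restart_period == 0:
            m = np.zeros_like(m)  # restart: drop accumulated momentum

        m = beta1 * m + (1 - beta1) * grad         # first-moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2    # second-moment estimate
        v_hat = np.maximum(v_hat, v)               # AMSGrad: keep v_hat non-decreasing

        theta = theta - lr * m / (np.sqrt(v_hat) + eps)
        return theta, {"m": m, "v": v, "v_hat": v_hat, "t": t}

A caller would initialize state as zeros matching theta's shape, e.g. {"m": np.zeros_like(theta), "v": np.zeros_like(theta), "v_hat": np.zeros_like(theta), "t": 0}, and apply the step to the Q-function parameter vector at each iteration. The max operation on v_hat is what distinguishes AMSGrad from Adam and is what makes the update amenable to convergence analysis.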
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Deep Reinforcement Learning
Machine Learning: Deep Learning
Constraints and SAT: Constraint Optimization