DeepMellow: Removing the Need for a Target Network in Deep Q-Learning

Seungchan Kim, Kavosh Asadi, Michael Littman, George Konidaris

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2733-2739. https://doi.org/10.24963/ijcai.2019/379

Deep Q-Network (DQN) is an algorithm that achieves human-level performance in complex domains like Atari games. One of the important elements of DQN is its use of a target network, which is necessary to stabilize learning. We argue that a target network is incompatible with online reinforcement learning, and show that faster and more stable learning is possible without one when Mellowmax, an alternative softmax operator, is used instead. We derive novel properties of Mellowmax, and empirically show that the combination of DQN and Mellowmax, but without a target network, outperforms DQN with a target network.
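The abstract does not state the Mellowmax operator's definition; following Asadi & Littman's formulation, it is mm_ω(x) = (1/ω) log((1/n) Σᵢ exp(ω xᵢ)). A minimal sketch (not the paper's implementation, and the variable names are illustrative) that computes it with a max-shift for numerical stability:

```python
import numpy as np

def mellowmax(q_values, omega=5.0):
    """Mellowmax: (1/omega) * log(mean(exp(omega * q))).
    A max-shift (log-sum-exp trick) avoids overflow for large omega."""
    q = np.asarray(q_values, dtype=float)
    shift = np.max(omega * q)
    return (shift + np.log(np.mean(np.exp(omega * q - shift)))) / omega

q = np.array([1.0, 2.0, 3.0])
mellowmax(q, omega=100.0)  # approaches max(q) as omega grows
mellowmax(q, omega=1e-6)   # approaches mean(q) as omega shrinks
```

Mellowmax interpolates between the mean (ω → 0) and the max (ω → ∞) of its inputs, which is the property that lets it replace the hard max in the Q-learning update.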
Keywords:
Machine Learning: Reinforcement Learning
Uncertainty in AI: Sequential Decision Making