Hashing over Predicted Future Frames for Informed Exploration of Deep Reinforcement Learning

Haiyan Yin, Jianda Chen, Sinno Jialin Pan

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 3026-3032. https://doi.org/10.24963/ijcai.2018/420

In deep reinforcement learning (RL) tasks, an efficient exploration mechanism should encourage an agent to take actions that lead to less frequently visited states, which may yield a higher cumulative future return. However, both anticipating future states and evaluating their visitation frequency are non-trivial, especially in deep RL domains where a state is represented by high-dimensional image frames. In this paper, we propose a novel informed exploration framework for deep RL that equips an RL agent with the ability to predict future transitions and to evaluate the visitation frequency of the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to predict the future frame given a state-action pair, and a convolutional autoencoder model to hash over the seen frames. In addition, to use the counts derived from the seen frames to evaluate the frequency of the predicted frames, we tackle the challenge of matching the predicted future frames with their corresponding seen frames at the latent feature level. In this way, we derive a reliable metric for evaluating the novelty of the future direction pointed to by each action, and hence guide the agent to explore the least frequent one.
Keywords:
Machine Learning: Deep Learning
Machine Learning Applications: Applications of Reinforcement Learning
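
The following minimal sketch illustrates the count-based, informed-exploration idea described in the abstract. It is not the authors' implementation: the prediction model (`predict_next_frame`), the autoencoder encoder (`encode`), the latent dimension, the SimHash-style sign hashing over latent codes, and the epsilon-informed action rule are all hypothetical stand-ins chosen for illustration.

```python
# Illustrative sketch only, not the paper's released code. It assumes two
# learned components supplied by the caller: predict_next_frame(state, action),
# a deep prediction model returning the predicted next frame, and
# encode(frame), the convolutional autoencoder's latent embedding.
# Hashing is a SimHash-style sign projection of the latent code (assumption).
import numpy as np
from collections import defaultdict

LATENT_DIM = 64   # assumed size of the autoencoder latent code
HASH_BITS = 16    # length of the binary hash key

rng = np.random.default_rng(0)
projection = rng.standard_normal((HASH_BITS, LATENT_DIM))  # fixed random projection
visit_counts = defaultdict(int)                            # counts per hash bucket


def hash_code(latent):
    """Binary hash of a latent feature vector (sign of a random projection)."""
    return tuple((projection @ latent > 0).astype(np.int8))


def update_counts(frame, encode):
    """Record a visit for a frame actually observed by the agent."""
    visit_counts[hash_code(encode(frame))] += 1


def informed_action(state, actions, predict_next_frame, encode,
                    greedy_action, epsilon=0.1):
    """Epsilon-informed exploration: with probability epsilon, pick the action
    whose predicted next frame falls in the least-visited hash bucket;
    otherwise return the greedy (e.g. Q-maximizing) action."""
    if rng.random() < epsilon:
        predicted_counts = [
            visit_counts[hash_code(encode(predict_next_frame(state, a)))]
            for a in actions
        ]
        return actions[int(np.argmin(predicted_counts))]
    return greedy_action
```

In use, `update_counts` would be called on every frame the agent actually observes, while `informed_action` replaces uniform random exploration in an epsilon-greedy policy, steering exploratory steps toward the predicted frame whose hash bucket has been visited least often.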