Towards High-Level Intrinsic Exploration in Reinforcement Learning

Nicolas Bougie, Ryutaro Ichise

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 5186-5187. https://doi.org/10.24963/ijcai.2020/733

Deep reinforcement learning (DRL) methods traditionally struggle with tasks in which environment rewards are sparse or delayed, so exploration remains one of the key challenges of DRL. Instead of relying solely on extrinsic rewards, many state-of-the-art methods use intrinsic curiosity as an exploration signal. While such methods hold the promise of better local exploration, discovering global exploration strategies remains beyond their reach. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration into reinforcement learning. Our curiosity signal is driven by a fast reward that handles local exploration and a slow reward that incentivizes long-horizon exploration strategies. We formulate curiosity as the error in an agent’s ability to reconstruct observations given their contexts. Experimental results show that this high-level exploration enables our agents to outperform prior work on several Atari games.
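
As an illustration only, and not the paper's implementation, the sketch below shows one way such a two-timescale curiosity signal could be computed: observations are reconstructed from their contexts, and the reconstruction errors of a frequently updated ("fast") model and a rarely updated ("slow") model are mixed into one intrinsic reward. The ContextReconstructor network, the mixing weight beta, and all dimensions are assumptions, not values from the paper.

# Minimal sketch (assumptions noted above), PyTorch.
import torch
import torch.nn as nn

class ContextReconstructor(nn.Module):
    """Reconstructs an observation from its context vector."""
    def __init__(self, context_dim: int, obs_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return self.net(context)

def intrinsic_reward(fast_model, slow_model, context, obs, beta=0.5):
    """Curiosity = error in reconstructing the observation from its context.

    The fast model would be trained every step (local novelty); the slow model
    would be updated rarely, so its error decays slowly and keeps rewarding
    regions the agent has not revisited for a long time (long-horizon signal).
    """
    with torch.no_grad():
        fast_err = (fast_model(context) - obs).pow(2).mean(dim=-1)
        slow_err = (slow_model(context) - obs).pow(2).mean(dim=-1)
    return (1 - beta) * fast_err + beta * slow_err

if __name__ == "__main__":
    obs_dim, ctx_dim = 32, 16
    fast = ContextReconstructor(ctx_dim, obs_dim)
    slow = ContextReconstructor(ctx_dim, obs_dim)
    ctx = torch.randn(4, ctx_dim)   # batch of context vectors
    obs = torch.randn(4, obs_dim)   # corresponding observations
    print(intrinsic_reward(fast, slow, ctx, obs))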
Keywords:
Machine Learning: Deep Reinforcement Learning
Machine Learning: Reinforcement Learning
Agent-based and Multi-agent Systems: Other