DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards

Shanchuan Wan, Yujin Tang, Yingtao Tian, Tomoyuki Kaneko

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4289-4298. https://doi.org/10.24963/ijcai.2023/477

Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards. Recent studies have shown the effectiveness of encouraging exploration with intrinsic rewards estimated from the novelty of observations. However, there is a gap between the novelty of an observation and the agent's exploration, because both the stochasticity in the environment and the agent's behavior may affect the observation. To evaluate exploratory behaviors accurately, we propose DEIR, a novel method in which we theoretically derive an intrinsic reward from a conditional mutual information term that scales, in a principled way, with the novelty contributed by the agent's own exploration, and then implement the reward with a discriminative forward model. Extensive experiments on both standard and advanced exploration tasks in MiniGrid show that DEIR quickly learns a better policy than the baselines. Our evaluations on ProcGen demonstrate both the generalization capability and the general applicability of our intrinsic reward.
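
To make the core idea concrete, the sketch below illustrates one plausible way a discriminative forward model could gate an episodic novelty signal. It is a minimal illustration written against the abstract only: the class and function names (DiscriminativeForwardModel, episodic_intrinsic_reward), the contrastive training objective, and the min-distance novelty term are all assumptions for exposition, not the authors' released implementation or the paper's exact reward formula.

# Hypothetical sketch: a discriminative forward model gating an episodic
# novelty reward, in the spirit of the abstract above. Not the paper's code.
import torch
import torch.nn as nn


class DiscriminativeForwardModel(nn.Module):
    """Classifier scoring whether (obs, action, next_obs) is a real transition."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act, next_obs):
        x = torch.cat([obs, act, next_obs], dim=-1)
        return self.net(x)  # logit: high => transition looks genuine

    def loss(self, obs, act, next_obs, fake_next_obs):
        # Contrastive objective: real next observations vs. negatives drawn
        # from elsewhere in the batch (one common way to train such a model;
        # an assumption here, not taken from the paper).
        real = self(obs, act, next_obs)
        fake = self(obs, act, fake_next_obs)
        bce = nn.functional.binary_cross_entropy_with_logits
        return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))


def episodic_intrinsic_reward(model, obs, act, next_obs, episode_memory):
    """Scale a simple episodic novelty term by the discriminator's confidence.

    `episode_memory` holds observations seen so far in the current episode;
    the min-distance novelty used here is one plausible choice for
    illustration, not the formula derived in the paper.
    """
    with torch.no_grad():
        if episode_memory:
            mem = torch.stack(episode_memory)           # (N, obs_dim)
            novelty = torch.cdist(next_obs.unsqueeze(0), mem).min().item()
        else:
            novelty = 1.0
        # Down-weight novelty that the discriminator cannot attribute to the
        # agent's own transition (e.g., novelty from environment stochasticity).
        conf = torch.sigmoid(model(obs, act, next_obs)).item()
    return conf * novelty

The design intent of this sketch mirrors the abstract's motivation: raw observation novelty alone conflates environment stochasticity with genuine exploration, so the discriminator's confidence is used to keep only the portion of novelty that is consistent with the agent's own behavior.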
Keywords:
Machine Learning: ML: Reinforcement learning