Exploring Parameter Space with Structured Noise for Meta-Reinforcement Learning

Hui Xu, Chong Zhang, Jiaxing Wang, Deqiang Ouyang, Yu Zheng, Jie Shao

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3153-3159. https://doi.org/10.24963/ijcai.2020/436

Efficient exploration is a major challenge in Reinforcement Learning (RL) and has been studied extensively. However, for a new task, existing methods explore either by taking actions that maximize task-agnostic objectives (such as information gain) or by applying a simple dithering strategy (such as noise injection), which may not be effective enough. In this paper, we investigate whether previous learning experiences can be leveraged to guide exploration of the current new task. To this end, we propose a novel Exploration with Structured Noise in Parameter Space (ESNPS) approach. ESNPS utilizes meta-learning and directly uses meta-policy parameters, which contain prior knowledge, as structured noise to perturb the base model for effective exploration in new tasks. Experimental results on four groups of tasks (cheetah velocity, cheetah direction, ant velocity and ant direction) demonstrate the superiority of ESNPS over a number of competitive baselines.
Keywords:
Machine Learning: Deep Reinforcement Learning
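The core idea described in the abstract, perturbing base-model parameters with structured noise derived from meta-policy parameters rather than with isotropic random noise, can be illustrated with a minimal sketch. This is only a loose reading of the abstract, not the paper's actual algorithm: the parameter dimensions, the scaling coefficient `alpha`, and the function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter vectors; the paper's actual policy architecture
# and dimensionality are not given on this page.
dim = 8
theta_base = rng.normal(size=dim)  # base-policy parameters for the current task
theta_meta = rng.normal(size=dim)  # meta-policy parameters (prior knowledge)


def explore_with_structured_noise(theta_base, theta_meta, sigma=0.1, rng=rng):
    """Perturb base parameters along the meta-policy direction.

    Unlike plain parameter-space noise injection (theta + Gaussian noise),
    the perturbation here is shaped by the meta-policy parameters, so
    exploration is biased toward directions informed by prior tasks.
    `alpha` is a hypothetical per-episode random scaling coefficient.
    """
    alpha = rng.normal(scale=sigma)          # random magnitude
    return theta_base + alpha * theta_meta   # structured, not isotropic, noise


theta_explore = explore_with_structured_noise(theta_base, theta_meta)
print(theta_explore.shape)
```

In this sketch the perturbation is always a scalar multiple of the meta-policy parameters, so every exploratory policy stays on a line through the base parameters in the direction suggested by prior experience; plain dithering would instead scatter parameters isotropically around the base model.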