Self-Guided Evolution Strategies with Historical Estimated Gradients

Fei-Yu Liu, Zi-Niu Li, Chao Qian

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 1474-1480. https://doi.org/10.24963/ijcai.2020/205

Evolution Strategies (ES) are a class of black-box optimization algorithms and have been widely applied to solve problems, e.g., in reinforcement learning (RL), where the true gradient is unavailable. ES estimate the gradient of an objective function with respect to the parameters by randomly sampling search directions and evaluating parameter perturbations in these directions. However, the gradient estimator of ES tends to have a high variance in high-dimensional optimization, thus requiring a large number of samples and making ES inefficient. In this paper, we propose a new ES algorithm, SGES, which utilizes historical estimated gradients to construct a low-dimensional subspace for sampling search directions, and adjusts the importance of this subspace adaptively. We prove that the variance of the gradient estimator of SGES can be much smaller than that of Vanilla ES, while its bias can be well bounded. Empirical results on benchmark black-box functions and a set of popular RL tasks demonstrate the superior performance of SGES over state-of-the-art ES algorithms.
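
The abstract describes antithetic ES gradient estimation and a sampling scheme that mixes a low-dimensional subspace of historical estimated gradients with the full parameter space. The following is a minimal sketch of that idea, not the authors' implementation; the names `f`, `theta`, `alpha`, `sigma`, `n_pairs`, and the fixed mixing probability are illustrative assumptions.

```python
import numpy as np

def sges_gradient(f, theta, grad_history, alpha=0.5, sigma=0.1, n_pairs=10):
    """Antithetic ES gradient estimate guided by historical gradients (sketch)."""
    d = theta.size
    # Orthonormal basis of the subspace spanned by the most recent estimated gradients.
    Q = np.linalg.qr(np.stack(grad_history, axis=1))[0] if grad_history else None

    grad = np.zeros(d)
    for _ in range(n_pairs):
        if Q is not None and np.random.rand() < alpha:
            # Sample a search direction inside the k-dimensional historical-gradient subspace.
            eps = Q @ np.random.randn(Q.shape[1])
        else:
            # Sample a search direction from the full d-dimensional parameter space.
            eps = np.random.randn(d)
        eps /= np.linalg.norm(eps) + 1e-12
        # Antithetic (mirrored) evaluation of the perturbation.
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) / (2 * sigma) * eps
    return grad / n_pairs
```

In the paper, the importance of the subspace is adjusted adaptively rather than fixed; in this sketch that would correspond to updating `alpha` based on which sampling source yields the larger objective improvement.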
Keywords:
Heuristic Search and Game Playing: Heuristic Search and Machine Learning
Heuristic Search and Game Playing: Heuristic Search
Machine Learning: Reinforcement Learning