A Generic Approach for Accelerating Stochastic Zeroth-Order Convex Optimization

Xiaotian Yu, Irwin King, Michael R. Lyu, Tianbao Yang

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 3040-3046. https://doi.org/10.24963/ijcai.2018/422

In this paper, we propose a generic approach for accelerating the convergence of existing algorithms for solving stochastic zeroth-order convex optimization (SZCO). Standard techniques for accelerating stochastic zeroth-order algorithms either explore multiple function evaluations (e.g., two-point evaluations) or exploit global conditions of the problem (e.g., smoothness and strong convexity). These classic acceleration techniques, however, restrict the applicability of newly developed algorithms. The key to our generic approach is to exploit a local growth condition (also called a local error bound condition) of the objective function in SZCO. The benefits of the proposed acceleration technique are: (i) it is applicable to both the one-point and two-point evaluation settings; (ii) it does not require strong convexity or smoothness of the objective function; (iii) it yields improved convergence for a broad family of problems. Empirical studies in various settings demonstrate the effectiveness of the proposed acceleration approach.
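To make the setting concrete, the following is a minimal Python sketch, under stated assumptions, of the standard two-point zeroth-order gradient estimator mentioned above, wrapped in one common restart pattern for exploiting a local growth condition. The smoothing parameter mu, the step-size schedule, the epoch budget, and the quadratic test objective are all illustrative assumptions, not the paper's algorithm or parameters.

import numpy as np

def two_point_gradient_estimate(f, x, mu=1e-4, rng=None):
    # Standard two-point zeroth-order estimator: uses only function values.
    # g = d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, with u a random unit direction.
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    return d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def restarted_szco(f, x0, epochs=8, inner_iters=500, eta0=0.05):
    # Hypothetical restart wrapper: run a base zeroth-order method for a fixed
    # budget, then restart from its output with a geometrically reduced step
    # size. The step size should scale roughly like 1/d for stability; this
    # schedule is an assumption for illustration, not the paper's method.
    x, eta = x0, eta0
    for _ in range(epochs):
        for _ in range(inner_iters):
            x = x - eta * two_point_gradient_estimate(f, x)
        eta /= 2.0  # shrink the step size at each restart
    return x

# Illustrative use on a toy convex objective (assumed, not from the paper).
f = lambda x: float(np.sum(x ** 2))
x_final = restarted_szco(f, np.ones(10))
print(f(x_final))  # should be close to 0

The geometric step-size schedule across restarts is the kind of mechanism that local-growth-based analyses typically rely on: iterates restarted closer to the optimum can safely use smaller steps, which is where the improved rates come from.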
Keywords:
Machine Learning
Uncertainty in AI