Tight Convergence Rate of Gradient Descent for Eigenvalue Computation

Qinghua Ding, Kaiwen Zhou, James Cheng

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3276-3282. https://doi.org/10.24963/ijcai.2020/453

Riemannian gradient descent (RGD) is a simple, popular and efficient algorithm for leading eigenvector computation [Absil et al., 2008]. However, the existing analysis of RGD for the eigenproblem is not tight: the best known rate is O(log(n/epsilon)/Delta^2), due to [Xu et al., 2018]. In this paper, we show that RGD in fact converges at the rate O(log(n/epsilon)/Delta), and we give instances that show the tightness of our result. This improves the eigengap dependence of the best prior analysis by a quadratic factor. We also give a tight convergence analysis of a deterministic variant of Oja's rule due to [Oja, 1982], showing that it too enjoys a convergence rate of O(log(n/epsilon)/Delta); previous works gave only asymptotic characterizations [Oja, 1982; Oja, 1989; Yi et al., 2005]. Our tools for proving these convergence results include a novel reduction and chaining technique and a noisy fixed-point iteration argument. Finally, we give empirical justification of our convergence rates on synthetic and real data.
Keywords:
Machine Learning: Dimensionality Reduction and Manifold Learning
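
As a concrete illustration of the two iterations discussed in the abstract, below is a minimal NumPy sketch (not the authors' implementation; the function names, step size eta, iteration counts and test matrix are illustrative assumptions). RGD ascends the Rayleigh quotient along the Riemannian gradient (I - x x^T) A x and retracts by renormalizing, while the deterministic Oja-style iteration applies the same tangential update without an explicit renormalization step.

```python
import numpy as np

def rgd_leading_eigvec(A, eta, n_iters=500, seed=0):
    """Riemannian gradient descent on the unit sphere for the leading
    eigenvector of a symmetric A: take a step along the Riemannian
    gradient (I - x x^T) A x, then retract by renormalizing."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        Ax = A @ x
        x = x + eta * (Ax - (x @ Ax) * x)  # tangent-space gradient step
        x /= np.linalg.norm(x)             # retraction back onto the sphere
    return x

def oja_leading_eigvec(A, eta, n_iters=500, seed=0):
    """Deterministic Oja-style iteration: the cubic term -(x^T A x) x
    takes the place of explicit renormalization and keeps ||x|| close to 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        Ax = A @ x
        x = x + eta * (Ax - (x @ Ax) * x)
    return x / np.linalg.norm(x)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((100, 20))
    A = M @ M.T / M.shape[1]               # symmetric PSD test matrix (assumption)
    eta = 1.0 / np.linalg.norm(A, 2)       # illustrative step size
    v_true = np.linalg.eigh(A)[1][:, -1]   # reference leading eigenvector
    for solver in (rgd_leading_eigvec, oja_leading_eigvec):
        v = solver(A, eta)
        print(solver.__name__, abs(v @ v_true))  # alignment with the true eigenvector
```

In this sketch the two methods share the same tangential update; the only difference is that RGD renormalizes after every step, which matches the comparison drawn in the abstract between RGD and the deterministic Oja variant.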