SIFAR: A Simple Faster Accelerated Variance-Reduced Gradient Method

Zhize Li

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5662-5670. https://doi.org/10.24963/ijcai.2025/630

In this paper, we propose a simple faster accelerated gradient method called SIFAR for solving finite-sum optimization problems. Concretely, we consider both the general convex and strongly convex settings: i) For general convex finite-sum problems, SIFAR improves upon the previous state-of-the-art result given by Varag. In particular, for large-scale problems or when the required convergence error is not very small, SIFAR obtains the first optimal result O(n), matching the lower bound. ii) For strongly convex finite-sum problems, we also show that SIFAR achieves the optimal convergence rate matching the lower bound. Besides, SIFAR enjoys a simpler loopless algorithmic structure, while previous algorithms use double-loop structures. Moreover, we provide a novel dynamic multi-stage convergence analysis, which is the key to improving previous results to the optimal rates. Our new theoretical rates and novel convergence analysis for the fundamental finite-sum problem directly lead to key improvements for many other related problems, such as distributed/federated/decentralized optimization. Finally, numerical experiments show that SIFAR converges faster than the previous state-of-the-art Varag, validating our theoretical results and confirming the practical superiority of SIFAR.
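To illustrate the finite-sum setting and the loopless structure mentioned above, here is a minimal sketch of a generic loopless variance-reduced gradient loop (in the style of L-SVRG) for min_x (1/n) * sum_i f_i(x). This is not the SIFAR algorithm itself; the quadratic components f_i, the step size, and the snapshot-refresh probability are illustrative assumptions.

```python
# Hedged sketch: a generic loopless variance-reduced gradient method for the
# finite-sum problem min_x (1/n) * sum_i f_i(x). Not the SIFAR algorithm from
# the paper; f_i, step size, and refresh probability p are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Full gradient (1/n) * sum_i grad f_i(x).
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
w = x.copy()                 # snapshot point
g_snap = full_grad(w)        # full gradient at the snapshot
step, p = 0.01, 1.0 / n      # assumed step size and refresh probability

for _ in range(5000):
    i = rng.integers(n)
    # Variance-reduced stochastic gradient estimator.
    g = grad_i(x, i) - grad_i(w, i) + g_snap
    x = x - step * g
    # Loopless structure: refresh the snapshot with small probability p,
    # instead of using an explicit double-loop (outer epoch) structure.
    if rng.random() < p:
        w = x.copy()
        g_snap = full_grad(w)

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

The single-loop form above replaces the outer "epoch" loop of double-loop methods with a random snapshot refresh, which is the kind of loopless simplification the abstract contrasts with previous algorithms.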
Keywords:
Machine Learning: ML: Optimization
Data Mining: DM: Big data and scalability
Machine Learning: ML: Supervised Learning