SVD-free Convex-Concave Approaches for Nuclear Norm Regularization

Yichi Xiao, Zhe Li, Tianbao Yang, Lijun Zhang

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3126-3132. https://doi.org/10.24963/ijcai.2017/436

Minimizing a convex function of matrices regularized by the nuclear norm arises in many applications such as collaborative filtering and multi-task learning. In this paper, we study the general setting where the convex function may be non-smooth. When the data matrix, of size m × n, is very large, existing optimization methods are inefficient because each iteration requires a singular value decomposition (SVD), which takes O(m^2 n) time. To reduce the computational cost, we exploit the dual characterization of the nuclear norm to introduce a convex-concave optimization problem and design a subgradient-based algorithm that avoids SVD entirely. In each iteration, the proposed algorithm computes only the leading singular vector pair, reducing the per-iteration time complexity from O(m^2 n) to O(mn). To the best of our knowledge, this is the first SVD-free convex optimization approach for nuclear-norm regularized problems that does not rely on a smoothness assumption. Theoretical analysis shows that the proposed algorithm converges at the optimal O(1/√T) rate, where T is the number of iterations. We also extend our algorithm to the stochastic setting, where only stochastic subgradients of the convex function are available, and to a special case with an additional non-smooth regularizer (e.g., an L1-norm regularizer). Experiments on robust low-rank matrix approximation and link prediction demonstrate the efficiency of our algorithms.
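For context, the convex-concave reformulation mentioned in the abstract rests on a standard duality: the spectral norm is the dual of the nuclear norm, so ||X||_* = max_{||Y||_2 ≤ 1} <X, Y>, which turns min_X f(X) + λ||X||_* into a saddle-point problem over X and Y (the paper's exact formulation may differ in its details). The per-iteration saving then comes from needing only the leading singular vector pair rather than a full SVD. The following minimal Python/NumPy sketch is an illustration of that subroutine, not the authors' implementation; the function name top_singular_vectors and its parameters are hypothetical.

import numpy as np

def top_singular_vectors(A, n_iters=100, tol=1e-9):
    """Approximate the leading singular triplet (sigma, u, v) of A by power iteration."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)          # random unit start vector
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(n_iters):
        u = A @ v                       # O(mn) matrix-vector product
        u /= np.linalg.norm(u)
        w = A.T @ u                     # O(mn) matrix-vector product
        sigma_new = np.linalg.norm(w)   # current estimate of the top singular value
        v = w / sigma_new
        if abs(sigma_new - sigma) <= tol * max(sigma_new, 1.0):
            sigma = sigma_new
            break
        sigma = sigma_new
    return sigma, u, v

# Sanity check on a small random matrix: the estimate should match the
# largest singular value from a full SVD (which the abstract prices at O(m^2 n)).
A = np.random.default_rng(1).standard_normal((200, 100))
sigma, u, v = top_singular_vectors(A)
print(sigma, np.linalg.svd(A, compute_uv=False)[0])

Each pass costs two matrix-vector products, i.e., O(mn) work, matching the per-iteration complexity claimed in the abstract. In practice, when the spectral gap is small, one would instead use a Lanczos-type routine such as scipy.sparse.linalg.svds with k=1, which serves the same purpose.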
Keywords:
Machine Learning: Machine Learning
Constraints and Satisfiability: Solvers and Tools