Stochastic Second-Order Method for Large-Scale Nonconvex Sparse Learning Models

Hongchang Gao, Heng Huang

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2128-2134. https://doi.org/10.24963/ijcai.2018/294

Sparse learning models have shown promising performance in high-dimensional machine learning applications. The main challenge for sparse learning models is how to optimize them efficiently. Most existing methods address this by relaxing the problem to a convex one, which incurs large estimation bias. Thus, sparse learning models with nonconvex constraints have attracted much attention due to their better performance, but they are difficult to optimize because of the non-convexity. In this paper, we propose a linearly convergent stochastic second-order method to optimize this nonconvex problem for large-scale datasets. The proposed method incorporates second-order information to improve the convergence speed. Theoretical analysis shows that our method enjoys a linear convergence rate and is guaranteed to converge to the underlying true model parameter. Experimental results verify the efficiency and correctness of the proposed method.
Keywords:
Machine Learning: Feature Selection ; Learning Sparse Models
Machine Learning Applications: Big data ; Scalability
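
As a rough illustration of the kind of method the abstract describes, the sketch below alternates a damped mini-batch Newton step with hard thresholding onto the s largest-magnitude coordinates for the l0-constrained least-squares problem. This is a generic sketch under stated assumptions, not the authors' actual algorithm: the function names (stochastic_newton_ht, hard_threshold), the damping term, and all hyperparameters are illustrative choices of ours.

```python
import numpy as np

def hard_threshold(w, s):
    # Projection for the l0 constraint: keep the s largest-magnitude
    # entries of w and zero out the rest.
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def stochastic_newton_ht(X, y, s, batch_size=256, n_iters=50, damping=1e-3, seed=0):
    # Hypothetical sketch: minimize ||Xw - y||^2 / (2n) subject to ||w||_0 <= s
    # by repeating a mini-batch second-order (damped Newton) step followed by
    # hard thresholding.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)              # mini-batch gradient
        hess = Xb.T @ Xb / len(idx) + damping * np.eye(d)   # damped mini-batch Hessian
        w = hard_threshold(w - np.linalg.solve(hess, grad), s)
    return w

# Synthetic check: recover a 5-sparse ground truth from noisy linear measurements.
rng = np.random.default_rng(1)
d, n, s = 100, 2000, 5
w_true = np.zeros(d)
w_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)
w_hat = stochastic_newton_ht(X, y, s)
print("support recovered:", set(np.flatnonzero(w_hat)) == set(np.flatnonzero(w_true)))
```

Hard thresholding is the standard projection for an l0 constraint (as in iterative hard thresholding); the second-order step here stands in for the abstract's claim that curvature information speeds up convergence relative to first-order updates.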