Distributed Self-Paced Learning in Alternating Direction Method of Multipliers

Xuchao Zhang, Liang Zhao, Zhiqian Chen, Chang-Tien Lu

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 3148-3154. https://doi.org/10.24963/ijcai.2018/437

Self-paced learning (SPL) mimics the cognitive process of humans, who generally learn from easy samples to hard ones. One key issue in SPL is that the training of each instance weight depends on all the other samples, so the process cannot easily be distributed across a large-scale dataset. In this paper, we reformulate the self-paced learning problem in a distributed setting and propose a novel Distributed Self-Paced Learning method (DSPL) to handle large-scale datasets. Specifically, both the model and the instance weights can be optimized in parallel for each batch based on a consensus alternating direction method of multipliers (ADMM). We also prove the convergence of our algorithm under mild conditions. Extensive experiments on both synthetic and real datasets demonstrate that our approach is superior to existing methods.
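To make the batch-parallel structure concrete, below is a minimal sketch, not the authors' DSPL implementation, that combines the classic hard SPL weighting v_i = 1[loss_i < lambda] with a consensus-ADMM update for least-squares regression. All parameter names (rho, lam, mu, n_iters) and the threshold-annealing schedule are illustrative assumptions.

```python
# Illustrative sketch: self-paced weighting inside consensus ADMM
# for least-squares regression. Not the paper's DSPL algorithm;
# hyperparameters (rho, lam, mu, n_iters) are assumed for illustration.
import numpy as np

def spl_consensus_admm(batches, dim, rho=1.0, lam=0.5, mu=1.3, n_iters=30):
    K = len(batches)
    z = np.zeros(dim)                      # global consensus model
    U = [np.zeros(dim) for _ in range(K)]  # scaled dual variable per batch
    for _ in range(n_iters):
        W = []
        for k, (X, y) in enumerate(batches):
            # Self-paced weights from current losses: keep only "easy" samples.
            losses = (X @ z - y) ** 2
            v = (losses < lam).astype(float)
            # Local w-update, closed form for the weighted least-squares subproblem:
            # w_k = argmin_w sum_i v_i (x_i^T w - y_i)^2 + (rho/2)||w - z + u_k||^2
            A = X.T @ (v[:, None] * X) + (rho / 2.0) * np.eye(dim)
            b = X.T @ (v * y) + (rho / 2.0) * (z - U[k])
            W.append(np.linalg.solve(A, b))
        # Consensus and dual updates run once all batches report.
        z = np.mean([w + u for w, u in zip(W, U)], axis=0)
        U = [u + w - z for w, u in zip(W, U)]
        lam *= mu  # anneal the SPL threshold: admit harder samples over time
    return z

# Toy usage: split synthetic data into 4 batches and recover the model.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
batches = []
for _ in range(4):
    X = rng.normal(size=(100, 5))
    batches.append((X, X @ w_true + 0.1 * rng.normal(size=100)))
print(spl_consensus_admm(batches, dim=5))
```

In this sketch, each batch solves its w-subproblem and updates its instance weights independently; only the consensus averaging step requires communication, which is what permits the per-batch parallelism the abstract describes.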
Keywords:
Machine Learning: Machine Learning
Constraints and SAT: Constraint Optimisation
Heuristic Search and Game Playing: Distributed Search