Combining ADMM and the Augmented Lagrangian Method for Efficiently Handling Many Constraints

Joachim Giesen, Soeren Laue

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4525-4531. https://doi.org/10.24963/ijcai.2019/629

Many machine learning methods entail minimizing a loss function that is the sum of the losses for each data point. The form of the loss function is exploited algorithmically, for instance in stochastic gradient descent (SGD) and in the alternating direction method of multipliers (ADMM). However, there are also machine learning methods where the entailed optimization problem features the data points not in the objective function but in the form of constraints, typically one constraint per data point. Here, we address the problem of solving convex optimization problems with many convex constraints. Our approach is an extension of ADMM. The straightforward implementation of ADMM for solving constrained optimization problems in a distributed fashion solves constrained subproblems on different compute nodes and aggregates their solutions until a consensus is reached. Hence, the straightforward approach has three nested loops: one for reaching consensus, one for the constraints, and one for the unconstrained subproblems. Here, we show that solving the costly constrained subproblems can be avoided. In our approach, we combine the ability of ADMM to solve convex optimization problems in a distributed setting with the ability of the augmented Lagrangian method to solve constrained optimization problems. Consequently, our algorithm needs only two nested loops. We prove that it inherits the convergence guarantees of both ADMM and the augmented Lagrangian method. Experimental results corroborate our theoretical findings.
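
The two-loop structure described above can be illustrated with a minimal sketch (not the paper's reference implementation): consensus ADMM distributes the constraint blocks across nodes, and each node keeps augmented-Lagrangian multipliers for its own constraints, updating them once per outer iteration so that every inner subproblem is unconstrained. The toy objective f(x) = ½‖x − c‖², the linear inequality constraints A_k x ≤ b_k, the plain gradient steps for the inner solve, and all names and parameters (admm_auglag, rho, mu, lr, ...) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def admm_auglag(A_blocks, b_blocks, c, rho=1.0, mu=1.0,
                outer_iters=200, inner_iters=50, lr=0.05):
    """Sketch of the two-loop idea: consensus ADMM over constraint blocks,
    with augmented-Lagrangian multipliers for A_k x <= b_k updated once per
    outer iteration, so the inner subproblems are unconstrained."""
    n = c.shape[0]
    K = len(A_blocks)                        # number of compute nodes / constraint blocks
    z = np.zeros(n)                          # consensus variable
    x = [np.zeros(n) for _ in range(K)]      # local copies of the variable
    y = [np.zeros(n) for _ in range(K)]      # ADMM (consensus) multipliers
    lam = [np.zeros(A.shape[0]) for A in A_blocks]  # constraint multipliers per block

    for _ in range(outer_iters):             # loop 1: ADMM consensus iterations
        for k in range(K):
            A, b, xk = A_blocks[k], b_blocks[k], x[k]
            for _ in range(inner_iters):     # loop 2: unconstrained inner solve (gradient steps)
                active = np.maximum(A @ xk - b + lam[k] / mu, 0.0)
                grad = ((xk - c) / K               # node's share of the objective gradient
                        + y[k] + rho * (xk - z)    # consensus coupling term
                        + mu * (A.T @ active))     # augmented-Lagrangian constraint term
                xk = xk - lr * grad
            x[k] = xk
            # augmented-Lagrangian multiplier update for this block -- no third loop needed
            lam[k] = np.maximum(lam[k] + mu * (A @ xk - b), 0.0)
        z = sum(x[k] + y[k] / rho for k in range(K)) / K   # consensus update
        for k in range(K):
            y[k] = y[k] + rho * (x[k] - z)                  # consensus dual update
    return z

# Tiny usage example: project c onto {x : x >= 0}, written as -x <= 0 and
# split into two constraint blocks.
c = np.array([1.0, -2.0, 3.0])
A_blocks = [-np.eye(3)[:2], -np.eye(3)[2:]]
b_blocks = [np.zeros(2), np.zeros(1)]
print(admm_auglag(A_blocks, b_blocks, c))    # approximately [1, 0, 3]
```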
Keywords:
Machine Learning Applications: Big data; Scalability
Agent-based and Multi-agent Systems: Multi-agent Learning