A Solver + Gradient Descent Training Algorithm for Deep Neural Networks

Dhananjay Ashok, Vineel Nagisetty, Christopher Srinivasa, Vijay Ganesh

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1766-1773. https://doi.org/10.24963/ijcai.2022/246

We present a novel hybrid algorithm for training Deep Neural Networks that combines the state-of-the-art Gradient Descent (GD) method with a Mixed Integer Linear Programming (MILP) solver, outperforming GD and its variants in terms of accuracy as well as resource and data efficiency, for both regression and classification tasks. Our GD+Solver hybrid algorithm, called GDSolver, works as follows: given a DNN D as input, GDSolver invokes GD to partially train D until it gets stuck in a local minimum, at which point GDSolver invokes an MILP solver to exhaustively search a region of the loss landscape around the weights of D’s final layer, with the goal of tunnelling through and escaping the local minimum. The process is repeated until the desired accuracy is achieved. In our experiments, we find that GDSolver not only scales well to additional data and very large model sizes, but also outperforms all other competing methods in terms of rate of convergence and data efficiency. For regression tasks, GDSolver produced models that, on average, had 31.5% lower MSE in 48% less time, and for classification tasks on MNIST and CIFAR10, GDSolver achieved the highest accuracy of all competing methods while using only 50% of the training data that GD baselines required.
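
To make the control flow concrete, here is a minimal sketch of the alternating GD + solver loop the abstract describes. It is not the authors' implementation: it assumes PyTorch, an MSE regression task, and a model built as an nn.Sequential ending in a linear layer, and it substitutes a closed-form least-squares refit of the final layer (clamped to a box around the current weights) for the paper's exhaustive MILP search over that region. All function names below are hypothetical.

```python
# Sketch of the GDSolver outer loop (assumptions noted above; not the paper's code).
import torch
import torch.nn as nn

def gd_until_stuck(model, loss_fn, X, y, lr=1e-3, patience=20, max_steps=5000):
    """Run full-batch gradient descent until the loss stops improving."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    best, stall = float("inf"), 0
    for _ in range(max_steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        if loss.item() < best - 1e-6:
            best, stall = loss.item(), 0
        else:
            stall += 1
            if stall >= patience:  # treat a long stall as a local minimum
                break
    return best

def refit_final_layer(model, X, y, radius=1.0):
    """Stand-in for the MILP step: re-solve the final linear layer.

    With all earlier layers frozen, the network output is linear in the
    final layer's weights, so a least-squares solve (clamped to a box of
    the given radius around the current weights) plays the role of the
    region search the paper performs with an MILP solver.
    """
    final = model[-1]                      # assumes nn.Sequential ending in nn.Linear
    with torch.no_grad():
        feats = model[:-1](X)              # activations feeding the final layer
        A = torch.cat([feats, torch.ones(len(feats), 1)], dim=1)
        sol = torch.linalg.lstsq(A, y).solution
        w_new, b_new = sol[:-1].T, sol[-1]
        # stay inside the search region around the current weights
        final.weight.copy_(w_new.clamp(final.weight - radius, final.weight + radius))
        final.bias.copy_(b_new.clamp(final.bias - radius, final.bias + radius))

def gdsolver(model, X, y, rounds=5, target_mse=1e-3):
    """Alternate GD with the final-layer solve until the target is reached."""
    loss_fn = nn.MSELoss()
    for _ in range(rounds):
        mse = gd_until_stuck(model, loss_fn, X, y)
        if mse <= target_mse:
            break
        refit_final_layer(model, X, y)     # tunnel out of the local minimum
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(256, 4)
    y = X.sin().sum(dim=1, keepdim=True)
    net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    gdsolver(net, X, y)
```

The least-squares stand-in works only because the output is linear in the final layer's weights under MSE; an actual MILP encoding lets the same box be searched exhaustively under losses and constraints a closed-form solve cannot handle, which is the role the solver plays in GDSolver.
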
Keywords:
Constraint Satisfaction and Optimization: Solvers and Tools
Constraint Satisfaction and Optimization: Constraints and Machine Learning
Constraint Satisfaction and Optimization: Constraint Programming
Constraint Satisfaction and Optimization: Constraint Satisfaction
Constraint Satisfaction and Optimization: Constraint Optimization