Abstract

Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves
Tobias Domhan, Jost Tobias Springenberg, Frank Hutter

Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting, they can quickly detect (after a few stochastic gradient descent (SGD) steps) that the resulting network performs poorly and terminate the corresponding evaluation to save time. Here, we mimic this early termination of bad runs using a probabilistic model that extrapolates performance from the first part of a learning curve. Experiments with different neural network architectures show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.
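To make the idea concrete, the sketch below shows one simplified way such learning-curve extrapolation could drive early termination. It fits a single saturating power-law curve to the observed partial learning curve and stops the run if the extrapolated final accuracy is predicted to fall short of the best result seen so far. This is an illustrative assumption, not the paper's method: the paper uses a probabilistic combination of many parametric curve families with uncertainty estimates, whereas this stand-in is deterministic, and all function names here are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    """Saturating power law: accuracy rises toward the asymptote a."""
    return a - b * np.power(t, -c)

def predict_final_accuracy(observed, total_epochs):
    """Fit the partial learning curve and extrapolate to total_epochs."""
    t = np.arange(1, len(observed) + 1, dtype=float)
    params, _ = curve_fit(
        power_law, t, observed,
        p0=[max(observed), 1.0, 0.5],  # rough initial guess for (a, b, c)
        maxfev=10000,
    )
    return power_law(float(total_epochs), *params)

def should_terminate(observed, best_so_far, total_epochs, margin=0.0):
    """Terminate the run if the extrapolated final accuracy is unlikely
    to beat the best accuracy seen so far (plus an optional margin)."""
    if len(observed) < 4:          # too few points to fit three parameters
        return False
    try:
        predicted = predict_final_accuracy(observed, total_epochs)
    except RuntimeError:           # curve_fit failed to converge; keep running
        return False
    return predicted + margin < best_so_far

# Example: a slowly improving run that is unlikely to reach 0.92.
curve = [0.40, 0.55, 0.61, 0.64, 0.66, 0.67]
print(should_terminate(curve, best_so_far=0.92, total_epochs=100))

Because a poor hyperparameter setting is abandoned after only a few epochs instead of a full training run, the optimizer can evaluate many more candidate settings in the same compute budget, which is the source of the roughly twofold speedup reported above.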