
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization

Liam Li
Kevin Jamieson
Giulia DeSalvo
Afshin Rostamizadeh
Ameet Talwalkar
Journal of Machine Learning Research, 18(185):1-52, 2018

Abstract

Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
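As a concrete illustration (a minimal sketch, not the paper's reference implementation), the Python code below captures the structure the abstract describes: an outer loop over "brackets" that trade off the number of sampled configurations n against the resource r given to each, and an inner successive-halving loop that repeatedly evaluates configurations and discards all but the top 1/eta. The helpers get_random_config and run_config are placeholder callables standing in for sampling a hyperparameter configuration and returning its validation loss after r units of resource.

```python
import math
import random

def hyperband(get_random_config, run_config, max_resource=81, eta=3):
    """Sketch of Hyperband: successive halving run at several
    n-versus-r trade-offs, indexed by the bracket variable s."""
    # Small epsilon guards against floating-point log round-off.
    s_max = int(math.log(max_resource) / math.log(eta) + 1e-9)
    budget = (s_max + 1) * max_resource  # total resource per bracket
    best_config, best_loss = None, float("inf")
    for s in reversed(range(s_max + 1)):
        # Initial number of configurations and resource per configuration.
        n = int(math.ceil(budget / max_resource * eta**s / (s + 1)))
        r = max_resource * eta ** (-s)
        configs = [get_random_config() for _ in range(n)]
        # Successive halving: evaluate, rank, keep the top 1/eta.
        for i in range(s + 1):
            n_i = int(n * eta ** (-i))
            r_i = r * eta**i
            losses = [run_config(c, r_i) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda p: p[0])
            if ranked[0][0] < best_loss:
                best_loss, best_config = ranked[0]
            configs = [c for _, c in ranked[: max(1, n_i // eta)]]
    # As in the paper, return the configuration with the smallest
    # intermediate loss observed so far.
    return best_config, best_loss

# Toy usage: a "configuration" is a float, and loss improves with resource.
if __name__ == "__main__":
    random.seed(0)
    cfg, loss = hyperband(
        get_random_config=lambda: random.uniform(0, 1),
        run_config=lambda c, r: (c - 0.3) ** 2 + 1.0 / r,
    )
    print(cfg, loss)
```

With the defaults R = 81 and eta = 3, the most aggressive bracket (s = 4) starts 81 configurations with one unit of resource each, while the most conservative bracket (s = 0) runs 5 configurations at the full budget, hedging between aggressive early-stopping and plain random search.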
