
# TPE

## Introduction

TPE was introduced in *Algorithms for Hyper-Parameter Optimization* (Bergstra et al., NIPS 2011).

### Abstract

> Several recent advances to the state of the art in image classification benchmarks have come from better configurations of existing techniques rather than novel approaches to feature learning. Traditionally, hyper-parameter optimization has been the job of humans because they can be very efficient in regimes where only a few trials are possible. Presently, computer clusters and GPU processors make it possible to run more trials and we show that algorithmic approaches can find better results. We present hyper-parameter optimization results on tasks of training neural networks and deep belief networks (DBNs). We optimize hyper-parameters using random search and two new greedy sequential methods based on the expected improvement criterion. Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs. The sequential algorithms are applied to the most difficult DBN learning problems from [Larochelle et al., 2007] and find significantly better results than the best previously reported. This work contributes novel techniques for making response surface models P(y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.
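The "greedy sequential methods based on the expected improvement criterion" mentioned in the abstract refer to the density-ratio idea behind TPE: split the observed trials into good and bad configurations, model each group with a density, and propose the candidate that maximizes the ratio of the two. The sketch below illustrates that idea on a toy 1-D problem; it is self-contained and independent of this repository's code, and `objective`, the quantile `gamma`, and the candidate count are all illustrative assumptions.

```python
# Minimal sketch of the TPE density-ratio idea on a toy 1-D problem.
# Not this repository's implementation; all names and constants here
# are illustrative choices.
import numpy as np
from scipy.stats import gaussian_kde

def objective(x):
    return (x - 2.0) ** 2  # toy objective with its minimum at x = 2

rng = np.random.default_rng(0)
xs = list(rng.uniform(-5.0, 5.0, size=10))  # random warm-start trials
ys = [objective(x) for x in xs]

for _ in range(40):
    gamma = 0.25  # fraction of trials treated as "good"
    order = np.argsort(ys)
    n_good = max(2, int(gamma * len(xs)))
    good = np.asarray(xs)[order[:n_good]]  # best-performing configs
    bad = np.asarray(xs)[order[n_good:]]   # the rest

    l = gaussian_kde(good)  # density over good configurations, l(x)
    g = gaussian_kde(bad)   # density over the remaining ones, g(x)

    # Sample candidates from l(x) and keep the one maximizing l(x)/g(x),
    # a proxy for the expected-improvement criterion.
    cand = l.resample(64)[0]
    score = l(cand) / np.maximum(g(cand), 1e-12)
    x_new = float(cand[np.argmax(score)])

    xs.append(x_new)
    ys.append(objective(x_new))

best = int(np.argmin(ys))
print(f"best x = {xs[best]:.4f}, best y = {ys[best]:.6f}")
```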

## Usage

For example:

```bash
PYTHONPATH='./' python examples/TPE/rosenbrock_tpe.py
```
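For reference, the example's name suggests it minimizes the classic 2-D Rosenbrock function, shown below; the script's actual definition and search space may differ.

```python
# Classic 2-D Rosenbrock function (global minimum at x1 = x2 = 1).
# Shown for reference only; the example script's own definition may differ.
def rosenbrock(x1, x2):
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2
```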

## Benchmark

Modify the following line in `comparison/xbbo_benchmark.py`:

```python
test_algs = ["tpe"]
```

Then run the benchmark from the command line:

```bash
PYTHONPATH='./' python comparison/xbbo_benchmark.py
```
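Presumably, listing more than one name in `test_algs` benchmarks several optimizers side by side; the exact identifiers XBBO accepts are not documented here, so `"rs"` below is only an assumed name for random search.

```python
# Hypothetical: compare TPE against random search in one benchmark run.
# "rs" is an assumed algorithm name; check the repository for valid values.
test_algs = ["tpe", "rs"]
```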

## Results

### Branin

| Method | Minimum | Best minimum | Mean f_calls to min | Std f_calls to min | Fastest f_calls to min |
| --- | --- | --- | --- | --- | --- |
| XBBO(tpe) | 0.767 ± 0.306 | 0.406 | 156.8 | 63.038 | 4 |