
Commit add2bae

joss update
1 parent 8df696b commit add2bae

3 files changed: +159 -1 lines changed

README.md

Lines changed: 3 additions & 1 deletion
@@ -1,5 +1,7 @@
 # Hyperas [![Build Status](https://travis-ci.org/maxpumperla/hyperas.svg?branch=master)](https://travis-ci.org/maxpumperla/hyperas) [![PyPI version](https://badge.fury.io/py/hyperas.svg)](https://badge.fury.io/py/hyperas)
-A very simple convenience wrapper around hyperopt for fast prototyping with keras models. Hyperas lets you use the power of hyperopt without having to learn the syntax of it. Instead, just define your keras model as you are used to, but use a simple template notation to define hyper-parameter ranges to tune.
+Hyperas brings fast experimentation with Keras and hyperparameter optimization with Hyperopt together.
+It lets you use the power of hyperopt without having to learn the syntax of it.
+Instead, just define your keras model as you are used to, but use a simple template notation to define hyper-parameter ranges to tune.
 
 ## Installation
 ```python

paper.bib

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
@InProceedings{pmlr-v28-bergstra13,
  title = {Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures},
  author = {Bergstra, James and Yamins, Daniel and Cox, David},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {115--123},
  year = {2013},
  editor = {Dasgupta, Sanjoy and McAllester, David},
  volume = {28},
  number = {1},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/bergstra13.pdf},
  url = {https://proceedings.mlr.press/v28/bergstra13.html},
  abstract = {Many computer vision algorithms depend on configuration settings that are typically hand-tuned in the course of evaluating the algorithm for a particular data set. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameter choices is frequently critical to realizing a method’s full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to quantify or describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it is sometimes difficult to know whether a given technique is genuinely better, or simply better tuned. In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g. classification accuracy on validation examples) is computed from hyperparameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state of the art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83) and an object recognition task (CIFAR-10), using a single broad class of feed-forward vision architectures.}
}

@online{bergstra2012hyperopt,
  title = {Hyperopt},
  author = {Bergstra, James and others},
  year = {2012},
  publisher = {GitHub},
  url = {https://github.com/hyperopt/hyperopt},
}

@online{chollet2015keras,
  title = {Keras},
  author = {Chollet, Francois and others},
  year = {2015},
  publisher = {GitHub},
  url = {https://github.com/fchollet/keras},
}

@online{jinja2008,
  title = {Jinja},
  author = {Ronacher, Armin and others},
  year = {2008},
  publisher = {GitHub},
  url = {https://github.com/pallets/jinja},
}

@misc{akiba2019optuna,
  title = {Optuna: A Next-generation Hyperparameter Optimization Framework},
  author = {Takuya Akiba and Shotaro Sano and Toshihiko Yanase and Takeru Ohta and Masanori Koyama},
  year = {2019},
  eprint = {1907.10902},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG}
}

@misc{omalley2019kerastuner,
  title = {KerasTuner},
  author = {O'Malley, Tom and Bursztein, Elie and Long, James and Chollet, Fran\c{c}ois and Jin, Haifeng and Invernizzi, Luca and others},
  year = {2019},
  howpublished = {\url{https://github.com/keras-team/keras-tuner}}
}

paper.md

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
---
title: 'Hyperas: Simple Hyperparameter Tuning for Keras Models'
tags:
  - Python
  - Hyperparameter Tuning
  - Deep Learning
  - Keras
  - Hyperopt
authors:
  - name: Max Pumperla
    affiliation: "1, 2"
affiliations:
  - name: IU Internationale Hochschule
    index: 1
  - name: Pathmind Inc.
    index: 2
date: 19 November 2021
bibliography: paper.bib
---

# Summary

Hyperas is an extension of [Keras](https://keras.io/) [@chollet2015keras], which allows you to run hyperparameter optimization of your models using [Hyperopt](http://hyperopt.github.io/hyperopt/) [@bergstra2012hyperopt].
It was built to enable fast experimentation cycles for researchers and software developers.
With hyperas, you can set up your Keras models as you're used to and specify your hyperparameter search spaces in a convenient way, following the design principles suggested by the [Jinja project](https://jinja.palletsprojects.com/en/3.0.x/) [@jinja2008].

With hyperas, researchers can use the full power of hyperopt without sacrificing experimentation speed.
Its documentation is hosted on [GitHub](https://github.com/maxpumperla/hyperas) and comes with a suite of [examples](https://github.com/maxpumperla/hyperas/tree/master/examples) to get users started.

# Statement of need

Hyperas is in active use in the Python community and still sees [thousands of weekly downloads](https://pypistats.org/packages/hyperas), which shows a clear need for this experimentation library.
Over the years, hyperas has been used and cited by [research papers](https://scholar.google.com/scholar?cluster=1375058734373368171&hl=en&oi=scholarr), mostly by [referring to GitHub](https://scholar.google.com/scholar?hl=de&as_sdt=0%2C5&q=hyperas+keras&btnG=).
Researchers who want to focus on their deep learning model definitions can leverage hyperas to speed up their experiments, without getting bogged down in maintaining separate hyperparameter search spaces and configurations.
Since hyperas was published, tools like Optuna [@akiba2019optuna] have adopted a similar approach to hyperparameter tuning.
KerasTuner [@omalley2019kerastuner] is officially supported by Keras itself, but does not offer the same variety of hyperparameter search algorithms as hyperas.

# Design and API

Hyperas uses a Jinja-style template language to define search spaces implicitly in Keras model specifications.
Essentially, a regular configuration value in a Keras layer, such as `Dropout(0.2)`, gets replaced by a [suitable distribution](https://github.com/maxpumperla/hyperas/blob/master/hyperas/distributions.py) like `Dropout({{uniform(0, 1)}})`.
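
For illustration, a few more template handles of this kind might look like the fragment below; it is not standalone code but lines that would sit inside the model-building function introduced further down, and it assumes `choice`, another distribution exported by `hyperas.distributions`:

```python
# Fragment for illustration only: `model` is a Keras model under construction.
model.add(Dense({{choice([256, 512, 1024])}}))  # layer width picked from a discrete set
model.add(Dropout({{uniform(0, 1)}}))           # dropout rate drawn uniformly from [0, 1]
model.compile(loss='categorical_crossentropy',
              optimizer={{choice(['rmsprop', 'adam', 'sgd'])}},  # optimizer treated as a hyperparameter
              metrics=['accuracy'])
```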

To define a hyperas model, you proceed in two steps.
First, you set up a function that returns the data you want to train on, which could include features and labels for training, validation and test sets.
Schematically this would look as follows:

```python
def data():
    # Load your data here
    return x_train, y_train, x_test, y_test
```
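
To make this concrete, a minimal `data()` for MNIST could look like the following sketch; it assumes the MNIST loader and `to_categorical` utility shipped with Keras, and a flattened 784-dimensional input matching the model below:

```python
from keras.datasets import mnist
from keras.utils import to_categorical


def data():
    # Load MNIST, flatten the 28x28 images and one-hot encode the labels.
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype('float32') / 255
    x_test = x_test.reshape(-1, 784).astype('float32') / 255
    y_train = to_categorical(y_train, 10)
    y_test = to_categorical(y_test, 10)
    return x_train, y_train, x_test, y_test
```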

Next, you specify a function that takes your data as input arguments, defines a Keras model with hyperas template handles (`{{}}`), fits the model to your data, and returns a dictionary that contains at least a `loss` value to be minimized by hyperopt (e.g. the validation loss or the negative of the test accuracy) and the hyperopt `status` of the experiment.

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from hyperas.distributions import uniform
from hyperopt import STATUS_OK


def create_model(x_train, y_train, x_test, y_test):
    model = Sequential()
    model.add(Dense(512, input_shape=(784,)))
    model.add(Activation('relu'))
    model.add(Dropout({{uniform(0, 1)}}))
    # ... add more layers
    model.add(Dense(10))
    model.add(Activation('softmax'))

    # compile and fit model (loss and optimizer shown here are illustrative)
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, ...)

    # evaluate model and return the negative test accuracy as the loss
    score = model.evaluate(x_test, y_test, verbose=0)
    accuracy = score[1]
    return {'loss': -accuracy, 'status': STATUS_OK, 'model': model}
```

Lastly, you simply prompt the `optim` module of hyperas to `minimize` the model loss defined in `create_model`, using `data`, with a hyperparameter optimization algorithm like TPE or any other algorithm supported by hyperopt [@pmlr-v28-bergstra13].

```python
from hyperas import optim
from hyperopt import Trials, STATUS_OK, tpe

best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      algo=tpe.suggest,
                                      max_evals=10,
                                      trials=Trials())
```
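
The return values can then be inspected directly; for instance, a small usage sketch, assuming the `data()` and `create_model` functions above and that `optim.minimize` returns both the best hyperparameter assignment and the corresponding trained model, as in the project README:

```python
# Reload the data and evaluate the best model found during the search.
x_train, y_train, x_test, y_test = data()
print("Evaluation of best performing model:")
print(best_model.evaluate(x_test, y_test))
print("Best hyperparameter choices:")
print(best_run)
```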

Furthermore, note that hyperas can run [hyperparameter tuning in parallel](https://github.com/maxpumperla/hyperas#running-hyperas-in-parallel), using hyperopt's distributed MongoDB backend, as sketched below.
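
A rough sketch of such a setup follows; the MongoDB address `localhost:27017`, the database name `hyperas_db`, and the experiment key are placeholders, and the linked README section describes the exact requirements, such as making the generated model file available to the workers:

```python
from hyperas import optim
from hyperopt import tpe
from hyperopt.mongoexp import MongoTrials

# Store trials in MongoDB so that several worker processes can evaluate them in parallel.
trials = MongoTrials('mongo://localhost:27017/hyperas_db/jobs', exp_key='mnist_experiment')

best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      algo=tpe.suggest,
                                      max_evals=10,
                                      trials=trials)
```

Worker processes are started separately, e.g. with hyperopt's `hyperopt-mongo-worker --mongo=localhost:27017/hyperas_db` command, and each worker pulls pending trials from the shared queue.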

# Acknowledgements

We would like to thank all the open-source contributors who helped make `hyperas` what it is today.
It's a great honor to see this software continually used by the [community](https://github.com/maxpumperla/hyperas/network/dependents?package_id=UGFja2FnZS01MjIwODQ4OA%3D%3D).

# References
