Commit 1b507aa

Update readme to include link to full doc and paper
1 parent 1e54126 commit 1b507aa

File tree: 1 file changed (+18, -7 lines)

README.md

Lines changed: 18 additions & 7 deletions
@@ -1,9 +1,20 @@
-# Quick Start Guide
+# DeepOBS - A Deep Learning Optimizer Benchmark Suite
 
-## Install Deep OBS
+DeepOBS is a benchmarking suite that drastically simplifies, automates and improves the evaluation of deep learning optimizers.
+
+It can evaluate the performance of new optimizers on a variety of **real-world test problems** and automatically compare them with **realistic baselines**.
+
+The full documentation is available on readthedocs: https://deepobs-iclr.readthedocs.io/
+
+The paper describing DeepOBS is currently under review for ICLR 2019:
+https://openreview.net/forum?id=rJg6ssC5Y7
+
+## Quick Start Guide
+
+### Install Deep OBS
 pip install git+https://github.com/anonymousICLR2019submitter/DeepOBS.git
 
-## Download the data
+### Download the data
 deepobs_prepare_data.sh
 
 This will automatically download, sort and prepare all the datasets (except ImageNet). It can take a while, as it will download roughly 1 GB.
@@ -18,7 +29,7 @@ to run SGD on a simple multi-layer perceptron (with a learning rate of 1e-1 and
 
 Of course, the real value of a benchmark lies in evaluating new optimizers:
 
-## Download and edit a run script
+### Download and edit a run script
 You can download a template run script from here:
 
 https://github.com/anonymousICLR2019submitter/DeepOBS/blob/master/scripts/deepobs_run_sgd.py
@@ -35,7 +46,7 @@ Let's assume that we want to benchmark the RMSProp optimizer. Then we only have
 
 Usually the hyperparameters of the optimizers need to be included as well, but for now let's only take the learning rate as a hyperparameter for RMSProp (and, if you want, change all the 'sgd's in the comments to 'rmsprop'). Let's name this run script deepobs_run_rmsprop.py.
 
-## Run your optimizer
+### Run your optimizer
 You can now run your optimizer on a test problem. Let's try it on a noisy quadratic problem:
 
 python deepobs_run_rmsprop.py quadratic.noisy_quadratic --num_epochs=100 --lr=1e-1 --bs=128 --pickle --run_name=RMSProp_1e-1/
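The edit described in this step amounts to swapping the optimizer constructor inside the run script. The snippet below is a minimal sketch of what that change might look like, assuming the template builds a standard TensorFlow 1.x optimizer; the actual layout of deepobs_run_sgd.py may differ, and the function name build_optimizer is made up for illustration.

```python
# Minimal sketch, not the actual deepobs_run_sgd.py: swap the SGD constructor
# for RMSProp and keep the learning rate as the only tunable hyperparameter.
import tensorflow as tf

def build_optimizer(learning_rate):
    # Template (SGD) version:
    # return tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    # RMSProp version:
    return tf.train.RMSPropOptimizer(learning_rate=learning_rate)
```

After renaming the file to deepobs_run_rmsprop.py, the --lr flag in the commands below should then set the learning rate of RMSProp rather than SGD.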
@@ -66,7 +77,7 @@ Usually the hyperparameters of the optimizers need to be included as well, but f
 python deepobs_run_rmsprop.py mnist.mnist_mlp --num_epochs=5 --lr=1e-2 --bs=128 --pickle --run_name=RMSProp_1e-2/ --random_seed=44
 
 
-## Plot Results
+### Plot Results
 Now we can plot the results of those two "new" optimizers, "RMSProp_1e-1" and "RMSProp_1e-2". Since performance is always relative, we automatically plot it against the most popular optimizers (SGD, Momentum, Adam) with the best settings we found after tuning their hyperparameters. Try out:
 
 deepobs_plot_results.py --results_dir=results --log
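The --pickle flag in the run commands above writes each run's results to disk under the results directory, which is what deepobs_plot_results.py consumes. If you want to inspect the raw numbers yourself, a rough sketch like the following can help; the ".pickle" extension and the structure of the unpickled object are assumptions here, not part of a documented interface.

```python
# Rough sketch: find and load pickled run results for manual inspection.
# The ".pickle" extension and the layout of the loaded object are guesses;
# the supported path is deepobs_plot_results.py as shown above.
import glob
import pickle

for path in glob.glob("results/**/*.pickle", recursive=True):
    with open(path, "rb") as f:
        run_data = pickle.load(f)
    print(path, type(run_data))
```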
@@ -75,7 +86,7 @@ Usually the hyperparameters of the optimizers need to be included as well, but f
 Additionally, it will print out a table summarizing the performance over all test problems (here we only have one or two).
 If you add the option --saveto=save_dir, the plots and a color-coded table are saved as .png files and ready-to-include .tex files!
 
-## Estimate runtime overhead
+### Estimate runtime overhead
 You can estimate the runtime overhead of the new optimizers compared to SGD like this:
 
 deepobs_estimate_runtime.py deepobs_run_rmsprop.py --optimizer_arguments=--lr=1e-2
