README.md (18 additions, 7 deletions)
@@ -1,9 +1,20 @@
-# Quick Start Guide
+# DeepOBS - A Deep Learning Optimizer Benchmark Suite
 
-## Install Deep OBS
+DeepOBS is a benchmarking suite that drastically simplifies, automates and improves the evaluation of deep learning optimizers.
+
+It can evaluate the performance of new optimizers on a variety of **real-world test problems** and automatically compare them with **realistic baselines**.
+
+The full documentation is available on readthedocs: https://deepobs-iclr.readthedocs.io/
+
+The paper describing DeepOBS is currently under review for ICLR 2019:
@@ -35,7 +46,7 @@ Let's assume that we want to benchmark the RMSProp optimizer. Then we only have
 
 Usually the optimizer's hyperparameters need to be included as well, but for now let's take only the learning rate as a hyperparameter for RMSProp (and, if you want, change all the 'sgd's in the comments to 'rmsprop'). Let's name this run script deepobs_run_rmsprop.py.
 
-## Run your optimizer
+### Run your optimizer
 
 You can now run your optimizer on a test problem. Let's try it on a noisy quadratic problem:
 Now we can plot the results of those two "new" optimizers, "RMSProp_1e-1" and "RMSProp_1e-2". Since performance is always relative, we automatically plot it against the most popular optimizers (SGD, Momentum, Adam), with the best settings we found after tuning their hyperparameters. Try out:
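The actual run and plot commands come from DeepOBS itself (deepobs_run_rmsprop.py is created by copying the shipped SGD template), and their exact interface is not reproduced here. Purely to illustrate what "RMSProp on a noisy quadratic problem, with the learning rate as the only hyperparameter" means, here is a self-contained NumPy toy; the function name, problem size, and noise level below are invented for this sketch and are not part of the DeepOBS API.

```python
# Illustrative only: a toy "noisy quadratic" loss minimized with RMSProp,
# with the learning rate as the single hyperparameter. This is NOT the
# DeepOBS run-script template; it just sketches the idea described above.
import numpy as np


def run_rmsprop_on_noisy_quadratic(learning_rate, num_steps=500, dim=10,
                                   decay=0.9, eps=1e-8, seed=0):
    """Minimize f(theta) = 0.5 * theta^T H theta using noisy gradients."""
    rng = np.random.default_rng(seed)
    hessian = np.diag(rng.uniform(0.5, 5.0, size=dim))  # fixed diagonal quadratic
    theta = rng.normal(size=dim)
    mean_sq_grad = np.zeros(dim)  # RMSProp's running average of squared gradients

    losses = []
    for _ in range(num_steps):
        grad = hessian @ theta + rng.normal(scale=0.1, size=dim)  # noisy gradient
        mean_sq_grad = decay * mean_sq_grad + (1.0 - decay) * grad ** 2
        theta -= learning_rate * grad / (np.sqrt(mean_sq_grad) + eps)
        losses.append(0.5 * theta @ hessian @ theta)
    return losses


if __name__ == "__main__":
    # The two settings discussed above, "RMSProp_1e-1" and "RMSProp_1e-2".
    for lr in (1e-1, 1e-2):
        final_loss = run_rmsprop_on_noisy_quadratic(lr)[-1]
        print(f"RMSProp with learning rate {lr:g}: final loss {final_loss:.4f}")
```

Running it with learning rates 1e-1 and 1e-2 mirrors the two settings compared in the text; in DeepOBS the same sweep would be done through the run script's command-line arguments.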
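Likewise, the comparison that DeepOBS produces automatically (your optimizer plotted against its tuned SGD, Momentum, and Adam baselines) can be pictured with a plain matplotlib sketch. This reuses the toy function from the previous snippet and does not include the real baselines; the actual DeepOBS plotting command is documented at the readthedocs link above.

```python
# Illustrative only: plotting the two toy RMSProp runs side by side.
# DeepOBS produces this kind of comparison automatically and also overlays
# its tuned SGD / Momentum / Adam baselines; this sketch does not.
import matplotlib.pyplot as plt

# run_rmsprop_on_noisy_quadratic is the toy function from the sketch above.
for lr, label in ((1e-1, "RMSProp_1e-1"), (1e-2, "RMSProp_1e-2")):
    plt.plot(run_rmsprop_on_noisy_quadratic(lr), label=label)

plt.xlabel("Step")
plt.ylabel("Loss")
plt.yscale("log")
plt.legend()
plt.title("Toy noisy quadratic (illustration, not a DeepOBS plot)")
plt.show()
```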