Tune is a scalable framework for hyperparameter search with a focus on deep learning and deep reinforcement learning.
You can find the code for Tune here on GitHub. To get started with Tune, try going through our tutorial on using Tune with Keras.
(Experimental): You can try out the above tutorial on a free hosted server via Binder.
- Supports any deep learning framework, including PyTorch, TensorFlow, and Keras.
- Choose among scalable hyperparameter and model search techniques, such as Population Based Training (PBT), Median Stopping Rule, and HyperBand.
- Mix and match different hyperparameter optimization approaches - such as using HyperOpt with HyperBand or Nevergrad with HyperBand (see the sketch after this list).
- Visualize results with TensorBoard, parallel coordinates (Plot.ly), and rllab's VisKit.
- Scale to running on a large distributed cluster without changing your code.
- Parallelize training for models with GPU requirements, or for algorithms that may themselves be parallel and distributed, using Tune's resource-aware scheduling.
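For example, a search algorithm can be paired with a trial scheduler in a single call to tune.run. The sketch below combines HyperOptSearch with AsyncHyperBandScheduler; the import paths and constructor arguments shown are assumptions that may differ between Ray versions, and it reuses the train_func defined in the quick start below, so treat it as illustrative rather than definitive.

from hyperopt import hp
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.suggest.hyperopt import HyperOptSearch

# HyperOpt proposes new hyperparameter configurations to evaluate...
space = {"momentum": hp.uniform("momentum", 0.1, 0.9)}
search_alg = HyperOptSearch(space, metric="mean_accuracy", mode="max")

# ...while asynchronous HyperBand stops unpromising trials early.
scheduler = AsyncHyperBandScheduler(metric="mean_accuracy", mode="max")

tune.run(
    train_func,            # the trainable from the quick start below
    search_alg=search_alg,
    scheduler=scheduler,
    num_samples=20,        # number of configurations HyperOpt will sample
)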
Take a look at the User Guide for a comprehensive overview on how to use Tune's features.
You'll need to first install ray to import Tune.
pip install ray # also recommended: ray[debug]
This example uses Tune to run a small grid search over a neural network training function, reporting status on the command line until the stopping condition of mean_accuracy >= 99 is reached. Tune works with any deep learning framework.
Tune uses Ray as a backend, so we will first import and initialize Ray.
import ray
from ray import tune
ray.init()
For the function you wish to tune, pass in a reporter object:
def train_func(config, reporter):  # add a reporter arg
    model = ( ... )
    optimizer = SGD(model.parameters(),
                    momentum=config["momentum"])
    dataset = ( ... )

    for idx, (data, target) in enumerate(dataset):
        accuracy = model.fit(data, target)
        reporter(mean_accuracy=accuracy)  # report metrics
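Every metric passed to the reporter (here mean_accuracy) is logged by Tune and becomes available to stopping conditions and trial schedulers.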
Finally, configure your search and execute it on your Ray cluster:
all_trials = tune.run(
    train_func,
    name="quick-start",
    stop={"mean_accuracy": 99},
    config={"momentum": tune.grid_search([0.1, 0.2])}
)
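By default, trial results are also logged under ~/ray_results (one subdirectory per experiment name, here quick-start), so you can visualize the run with tensorboard --logdir ~/ray_results/quick-start.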
Tune can be used anywhere Ray can, e.g. on your laptop with ray.init()
embedded in a Python script, or in an auto-scaling cluster for massive parallelism.
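As a minimal sketch, assuming a Ray version where ray.init(address="auto") attaches to a running cluster, moving from a laptop to a cluster only changes the ray.init call:

import ray
from ray import tune

# On a laptop, ray.init() starts Ray locally.
# On a cluster launched with the Ray autoscaler, attach to the running
# cluster instead; address="auto" is assumed for recent Ray versions.
ray.init(address="auto")

# The tune.run call itself is unchanged from the quick start above.
tune.run(train_func, config={"momentum": tune.grid_search([0.1, 0.2])})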
If Tune helps you in your academic research, you are encouraged to cite our paper. Here is an example BibTeX entry:
@article{liaw2018tune,
  title={Tune: A Research Platform for Distributed Model Selection and Training},
  author={Liaw, Richard and Liang, Eric and Nishihara, Robert
          and Moritz, Philipp and Gonzalez, Joseph E and Stoica, Ion},
  journal={arXiv preprint arXiv:1807.05118},
  year={2018}
}