Framework for reproducible and trackable machine-learning experiments. The aim of this project is to let you develop ML models, data loaders, progress tracking and hyperparameter optimization independently of one another.
```sh
git clone https://github.com/ctrl-q/pass-the-torch
cd pass-the-torch
pip install -r requirements.txt
```
- The base classes for all models are defined in `models/base.py` (a minimal subclassing sketch follows this list)
- The other files in that folder are provided as examples
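As an illustration, adding a new model might look like the sketch below. This is a hypothetical example: it assumes `PyTorchModel` can be initialised and subclassed like a plain `torch.nn.Module`; the actual constructor arguments and required overrides are defined in `models/base.py` and may differ.

```python
# Hypothetical example, e.g. models/my_cnn.py
# Assumption: PyTorchModel behaves like a torch.nn.Module subclass;
# check models/base.py for the real interface.
import torch.nn as nn

from models.base import PyTorchModel


class MyCNN(PyTorchModel):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)
```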
All available experiments are stored in the experiments folder.
Some hyperparameters are common to all experiments, while others are specific to a single experiment.
All hyperparameters except for `datapath` and `trials` can be specified as:
- a value
- a tuple of 2 values, which will be interpreted as a range*
- a list of multiple values, which will be interpreted as a discrete list of choices*

All hyperparameters that are lists or tuples will be tuned via scikit-optimize for `trials` iterations.
* Please quote tuples or lists on the command line, e.g. `--lr (0.001, 0.1)` becomes `--lr '(0.001, 0.1)'`
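For illustration, an invocation could look like the following. Only `--lr`, `--datapath` and `--trials` are taken from above; `<experiment_name>` is a placeholder and `--batch_size` is a hypothetical hyperparameter used to show the list form.

```sh
# Hypothetical invocation: the tuple passed to --lr is tuned as a range and
# the list passed to --batch_size as discrete choices, via scikit-optimize.
python3 -m experiments.<experiment_name> --datapath data/ --trials 20 \
    --lr '(0.001, 0.1)' --batch_size '[32, 64, 128]'
```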
Training progress will be available via tensorboardX.
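Assuming the tensorboardX event files are written inside each experiment's output folder (see the save path below; the exact location may differ), they can be viewed with the standard TensorBoard CLI:

```sh
# Point TensorBoard at the experiment output directory
# (the path here is an assumption; adjust to wherever the event files land)
tensorboard --logdir <experiment_name>/
```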
Can be used to store any code for preparing your data for training, e.g. dataloaders, logging, or anything else you can think of.
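For example, a small data-preparation helper could look like the sketch below. Everything in it, including the function name and the use of torchvision's MNIST, is a hypothetical illustration rather than part of this project.

```python
# Hypothetical data-preparation helper; the dataset, transforms and function
# name are illustrative only.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def make_mnist_loader(datapath, batch_size=64, train=True):
    """Return a DataLoader over MNIST stored under `datapath`."""
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
    ])
    dataset = datasets.MNIST(datapath, train=train, download=True,
                             transform=transform)
    return DataLoader(dataset, batch_size=batch_size, shuffle=train)
```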
- Add your own model to the models folder. The model should subclass `PyTorchModel` or `SKLearnModel`
- Choose an experiment from the experiments folder
- Run "python3 -m experiments.<experiment_name> -h" to get the list of hyperparameters*
- Run "python3 -m experiments.<experiment_name>" with the hyperparameters specified**
The experiments will be saved in the following path: <experiment_name>/<hyphen-separated hyperparameters, in the same order as in the argparse>
* All experiments must be run from this folder, and not from the experiments folder
** The double quotes are needed
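Putting the steps together, a session might look like the following. The experiment name `mnist_classification` and the flags shown are placeholders; the actual hyperparameters come from each experiment's argparse and are listed by `-h`.

```sh
# Inspect the hyperparameters of a (placeholder) experiment, then run it.
# Both commands are issued from the repository root, not from experiments/.
python3 -m experiments.mnist_classification -h
python3 -m experiments.mnist_classification --datapath data/ --trials 10 --lr '(0.001, 0.1)'
# Results are then saved under <experiment_name>/<hyphen-separated hyperparameters>
```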