Kaggle Competition: Jigsaw Unintended Bias in Toxicity Classification

https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification

  • Python >= 3.6
  • PyTorch >= 0.4
  • Clear folder structure which is suitable for many deep learning projects.
  • .json config file support for more convenient parameter tuning.
  • Checkpoint saving and resuming.
  • Abstract base classes for faster development:
      • BaseTrainer handles checkpoint saving/resuming, training process logging, and more.
      • BaseDataLoader handles batch generation, data shuffling, and validation data splitting.
      • BaseModel provides basic model summary.
cookiecutter-pytorch/
│
├── <project name>/
│    │
│    ├── cli.py - command line interface
│    ├── main.py - main script to start train/test
│    │
│    ├── base/ - abstract base classes
│    │   ├── base_data_loader.py - abstract base class for data loaders
│    │   ├── base_model.py - abstract base class for models
│    │   └── base_trainer.py - abstract base class for trainers
│    │
│    ├── data_loader/ - anything about data loading goes here
│    │   └── data_loaders.py
│    │
│    ├── model/ - models, losses, and metrics
│    │   ├── loss.py
│    │   ├── metric.py
│    │   └── model.py
│    │
│    ├── trainer/ - trainers
│    │   └── trainer.py
│    │
│    └── utils/
│        ├── util.py
│        ├── logger.py - class for train logging
│        ├── visualization.py - class for tensorboardX visualization support
│        └── ...
│
├── data/ - default directory for storing input data
│
├── experiments/ - default directory for storing configuration files
│
├── saved/ - default checkpoints folder
│   └── runs/ - default logdir for tensorboardX
$ conda create --name <name> python=3.6
$ pip install -e .
$ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

The code in this repo is an MNIST example of the template. You can run the tests and the example project using:

$ pytest tests
$ <project name> train -c experiments/config.json

Config files are in .json format:

{
  "name": "Mnist_LeNet",        // training session name
  "n_gpu": 1,                   // number of GPUs to use for training.

  "arch": {
    "type": "MnistModel",       // name of model architecture to train
    "args": {

    }
  },
  "data_loader": {
    "type": "MnistDataLoader",         // selecting data loader
    "args":{
      "data_dir": "data/",             // dataset path
      "batch_size": 64,                // batch size
      "shuffle": true,                 // shuffle training data before splitting
      "validation_split": 0.1          // validation data ratio
      "num_workers": 2,                // number of cpu processes to be used for data loading
    }
  },
  "optimizer": {
    "type": "Adam",
    "args":{
      "lr": 0.001,                     // learning rate
      "weight_decay": 0,               // (optional) weight decay
      "amsgrad": true
    }
  },
  "loss": "nll_loss",                  // loss
  "metrics": [
    "my_metric", "my_metric2"          // list of metrics to evaluate
  ],
  "lr_scheduler": {
    "type": "StepLR",                   // learning rate scheduler
    "args":{
      "step_size": 50,
      "gamma": 0.1
    }
  },
  "trainer": {
    "epochs": 100,                     // number of training epochs
    "save_dir": "saved/",              // checkpoints are saved in save_dir/name
    "save_freq": 1,                    // save checkpoints every save_freq epochs
    "verbosity": 2,                    // 0: quiet, 1: per epoch, 2: full

    "monitor": "min val_loss"          // mode and metric for model performance monitoring. set 'off' to disable.
    "early_stop": 10                   // number of epochs to wait before early stop. set 0 to disable.

    "tensorboardX": true,              // enable tensorboardX visualization support
    "log_dir": "saved/runs"            // directory to save log files for visualization
  }
}
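
Each "type"/"args" pair above names a class and the keyword arguments used to construct it. As a rough sketch of how such an entry is typically instantiated in templates like this (the init_by_config helper below is hypothetical, not code from this repo):

import torch.optim as optim

def init_by_config(module, entry, *extra_args):
    """Look up entry['type'] in `module` and build it with entry['args'] as keyword arguments."""
    cls = getattr(module, entry['type'])
    return cls(*extra_args, **entry['args'])

# e.g. building the optimizer from the "optimizer" block above:
# optimizer = init_by_config(optim, config['optimizer'], model.parameters())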

Add additional configurations if you need them.

Modify the configurations in .json config files, then run:

python train.py --config experiments/config.json

You can resume from a previously saved checkpoint by:

python train.py --resume path/to/checkpoint

You can enable multi-GPU training by setting the n_gpu argument in the config file to a larger number. If you configure fewer GPUs than are available, the first n devices will be used by default. To select specific GPUs, specify their indices with the CUDA_VISIBLE_DEVICES environment variable.

python train.py --device 2,3 -c experiments/config.json

This is equivalent to

CUDA_VISIBLE_DEVICES=2,3 python train.py -c experiments/config.json
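
Internally, multi-GPU support in templates like this usually comes down to torch.nn.DataParallel. A minimal sketch of typical n_gpu handling (not necessarily the exact code in this repo):

import torch
import torch.nn as nn

def prepare_device(model: nn.Module, n_gpu: int):
    """Move the model to GPU and wrap it in DataParallel when more than one GPU is requested."""
    device = torch.device('cuda:0' if n_gpu > 0 and torch.cuda.is_available() else 'cpu')
    model = model.to(device)
    if n_gpu > 1:
        # replicate the model across the first n_gpu visible devices
        model = nn.DataParallel(model, device_ids=list(range(n_gpu)))
    return model, device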

Writing your own data loader

Inherit BaseDataLoader

BaseDataLoader is a subclass of torch.utils.data.DataLoader; you can use either of them.

BaseDataLoader handles:

  • Generating next batch
  • Data shuffling
  • Generating validation data loader by calling BaseDataLoader.split_validation()

DataLoader Usage

BaseDataLoader is an iterator; to iterate through batches:

for batch_idx, (x_batch, y_batch) in enumerate(data_loader):
    pass

Example

Please refer to data_loader/data_loaders.py for an MNIST data loading example.
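
For reference, a loader built on BaseDataLoader typically looks like the following minimal sketch (the import path depends on your project name, and the exact BaseDataLoader constructor signature may differ from this repo):

from torchvision import datasets, transforms
from base.base_data_loader import BaseDataLoader  # adjust the import path to your project name


class MnistDataLoader(BaseDataLoader):
    """Minimal sketch of an MNIST loader; see data_loader/data_loaders.py for the real one."""
    def __init__(self, data_dir, batch_size, shuffle=True, validation_split=0.0, num_workers=1, training=True):
        trsfm = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])
        self.dataset = datasets.MNIST(data_dir, train=training, download=True, transform=trsfm)
        super().__init__(self.dataset, batch_size, shuffle, validation_split, num_workers)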

Writing your own trainer

Inherit BaseTrainer

BaseTrainer handles:

  1. Training process logging
  2. Checkpoint saving
  3. Checkpoint resuming
  4. Reconfigurable performance monitoring for saving the current best model, and early stopping

  1. If config monitor is set to max val_accuracy, the trainer will save a checkpoint
    model_best.pth whenever the epoch's validation accuracy exceeds the current maximum.
  2. If config early_stop is set, training will be automatically terminated when model
    performance does not improve for the given number of epochs. This feature can be turned off by passing 0 to the early_stop option, or by deleting that line from the config (see the snippet after this list).
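
For example, the relevant trainer settings might look like this (values illustrative):

  "trainer": {
    "monitor": "max val_accuracy",     // save model_best.pth whenever val_accuracy reaches a new maximum
    "early_stop": 10                   // stop if val_accuracy has not improved for 10 epochs; 0 disables
  }
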
Implementing abstract methods

You need to implement _train_epoch() for your training process; if you need validation, you can also implement _valid_epoch(), as in trainer/trainer.py.

Example

Please refer to trainer/trainer.py for MNIST training.
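
A minimal sketch of what _train_epoch() typically looks like (attribute names such as self.model, self.optimizer, self.loss, self.device, and self.data_loader are assumed from the base-class conventions; the details differ from trainer/trainer.py):

from base.base_trainer import BaseTrainer  # adjust the import path to your project name


class Trainer(BaseTrainer):
    """Minimal _train_epoch sketch; see trainer/trainer.py for the full MNIST version."""

    def _train_epoch(self, epoch):
        self.model.train()
        total_loss = 0
        for batch_idx, (data, target) in enumerate(self.data_loader):
            data, target = data.to(self.device), target.to(self.device)

            self.optimizer.zero_grad()
            output = self.model(data)
            loss = self.loss(output, target)
            loss.backward()
            self.optimizer.step()

            total_loss += loss.item()

        # the returned dict becomes the epoch log used for monitoring and logging
        return {'loss': total_loss / len(self.data_loader)}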

Writing your own model

Inherit BaseModel
BaseModel handles:
  • Inheriting from torch.nn.Module
  • __str__: modifies the native print output to include the number of trainable parameters.
Implementing abstract methods

Implement the forward pass method forward().

Example

Please refer to model/model.py for a LeNet example.
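
A minimal sketch of a model built on BaseModel (layer sizes are illustrative; this is not the LeNet defined in model/model.py):

import torch.nn as nn
import torch.nn.functional as F
from base.base_model import BaseModel  # adjust the import path to your project name


class SimpleMnistModel(BaseModel):
    """Illustrative model; see model/model.py for the actual LeNet example."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = x.view(x.size(0), -1)                  # flatten images to vectors
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)   # matches the nll_loss used in the config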

Custom loss functions can be implemented in model/loss.py. Use them by setting "loss" in the config file to the corresponding function name.
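
A loss is just a function of (output, target); a short sketch of what model/loss.py might contain (the second function and its name are illustrative):

import torch.nn.functional as F

def nll_loss(output, target):
    """Negative log-likelihood loss, referenced as "loss": "nll_loss" in the config."""
    return F.nll_loss(output, target)

def my_custom_loss(output, target):
    """Illustrative custom loss; reference it as "loss": "my_custom_loss" in the config."""
    return F.cross_entropy(output, target)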

Metrics

Metric functions are located in model/metric.py.

You can monitor multiple metrics by providing a list in the configuration file, e.g.

"metrics": ["my_metric", "my_metric2"]

If you have additional information to be logged, merge it into log in _train_epoch() of your trainer class before returning, as shown below:

additional_log = {"gradient_norm": g, "sensitivity": s}
log = {**log, **additional_log}
return log

You can test a trained model by running test.py, passing the path to the trained checkpoint with the --resume argument.

To split validation data from a data loader, call BaseDataLoader.split_validation(); it will return a validation data loader whose number of samples is determined by the ratio specified in your config file.

Note: the split_validation() method will modify the original data loader.
Note: split_validation() will return None if "validation_split" is set to 0.
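
Typical usage (the loader arguments below are illustrative; in practice they come from the config file):

data_loader = MnistDataLoader('data/', batch_size=64, shuffle=True, validation_split=0.1, num_workers=2)
valid_data_loader = data_loader.split_validation()  # returns None if validation_split is 0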

You can specify the name of the training session in config files:

"name": "MNIST_LeNet"

The checkpoints will be saved in save_dir/name/timestamp/checkpoint_epoch_n, with timestamp in mmdd_HHMMSS format.

A copy of config file will be saved in the same folder.

Note: checkpoints contain:

{
  'arch': arch,
  'epoch': epoch,
  'logger': self.train_logger,
  'state_dict': self.model.state_dict(),
  'optimizer': self.optimizer.state_dict(),
  'monitor_best': self.mnt_best,
  'config': self.config
}
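
This means a saved checkpoint can also be inspected or reloaded manually, for example (the path and variables below are illustrative; the --resume flag does this for you):

import torch

# `model` and `optimizer` must match the architecture and optimizer recorded in checkpoint['config']
checkpoint = torch.load('saved/Mnist_LeNet/0101_120000/checkpoint_epoch_10.pth')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
start_epoch = checkpoint['epoch'] + 1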

This template supports https://github.com/lanpa/tensorboardX visualization.

TensorboardX Usage

  1. Install

    Follow the installation guide at https://github.com/lanpa/tensorboardX

  2. Run training

    Set the tensorboardX option in the config file to true.

  3. Open tensorboard server

    Run tensorboard --logdir saved/runs/ at the project root; the server will then be available at http://localhost:6006

By default, the values of the loss and metrics specified in the config file, input images, and histograms of model parameters will be logged. If you need more visualizations, use add_scalar('tag', data), add_image('tag', image), etc. in the trainer._train_epoch method. The add_something() methods in this template are basically wrappers for those of the tensorboardX.SummaryWriter module.
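
For example, inside _train_epoch() (self.writer is assumed to be the template's tensorboardX wrapper):

from torchvision.utils import make_grid

# inside trainer._train_epoch(); no global step argument is needed because the
# template's writer wrapper tracks the current step (see note below)
self.writer.add_scalar('loss', loss.item())
self.writer.add_image('input', make_grid(data.cpu(), normalize=True))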

Note: you don't have to specify the current step, since the WriterTensorboardX class defined in utils/visualization.py tracks the current step.

This template is inspired by

  1. https://github.com/victoresque/pytorch-template
  2. https://github.com/daemonslayer/cookiecutter-pytorch