We have set up a Jupyter server for conveniently testing the inference performance of our trained models. It requires SSH port forwarding to our GPU server:

ssh -L 8080:localhost:8080 cse_username@gpu3.cse.iitk.ac.in

Access the remote Jupyter server locally using the following link: http://127.0.0.1:8080/tree?token=6413766c434bd136944fcd0f429162a53eb7a7f0dc174e18
Then open the inference_demo.ipynb notebook. The folder contains our trained models saved as PyTorch (.pth) checkpoints and sample images from the COCO test dataset.
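Before opening the notebook, you can confirm that the SSH tunnel is actually forwarding by probing the local port. A minimal stdlib sketch (the helper name port_open is ours, not part of the repo):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds,
    i.e. the SSH tunnel (or any other listener) is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # With the tunnel up, the forwarded Jupyter port should be reachable.
    print(port_open("127.0.0.1", 8080))
```

If this prints False, the ssh -L command above is not running or exited.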
The setup we followed to build our development environment is as follows.
Dependencies
- Ubuntu >= 20.04
- CUDA >= 11.3
- pytorch==1.12.1
- torchvision==0.13.1
- mmcv==2.0.0rc4
- mmengine==0.7.3
Our implementation is based on MMDetection==3.0.0rc6.
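The pins above mix exact versions (pytorch==1.12.1) with minimums (CUDA >= 11.3). A minimal stdlib sketch of a minimum-version check (the helper names are ours; a real project would use packaging.version, which also handles pre-release tags like rc4 correctly instead of ignoring them):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version like '1.12.1' into a tuple of ints.
    A non-numeric suffix is dropped, so '2.0.0rc4' -> (2, 0, 0);
    this means release candidates compare equal to the final release."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def version_at_least(installed: str, minimum: str) -> bool:
    """True if installed satisfies a '>= minimum' pin."""
    return parse_version(installed) >= parse_version(minimum)

if __name__ == "__main__":
    assert version_at_least("11.3", "11.3")       # CUDA pin satisfied
    assert not version_at_least("11.1", "11.3")   # too old
    assert version_at_least("2.0.0rc4", "2.0.0")  # rc suffix ignored
```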
Step 0. Create Conda Environment
conda create --name cs776 python=3.8 -y
conda activate cs776

Step 1. Install PyTorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

Step 2. Install MMEngine and MMCV
pip install -r requirements.txt

Step 3. Install CrossKD
cd cs776-kd
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.

Step 4. Prepare the dataset following the official MMDetection instructions.
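MMDetection's COCO configs expect the dataset under data/coco with the standard annotation and image folders. A minimal sketch to verify the layout before launching training (the helper name missing_coco_parts is ours; the layout follows MMDetection's documented convention):

```python
from pathlib import Path

# Expected COCO layout for MMDetection, relative to the repo root:
#   data/coco/annotations/instances_train2017.json
#   data/coco/annotations/instances_val2017.json
#   data/coco/train2017/   (training images)
#   data/coco/val2017/     (validation images)
REQUIRED = [
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
    "train2017",
    "val2017",
]

def missing_coco_parts(root: str) -> list:
    """Return the expected paths missing under root (empty list if complete)."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]

if __name__ == "__main__":
    print(missing_coco_parts("data/coco"))
```

An empty list means the dataset is laid out where the configs will look for it.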
Single GPU
python tools/train.py configs/crosskd/${CONFIG_FILE} [optional arguments]

Multi GPU
CUDA_VISIBLE_DEVICES=x,x,x,x bash tools/dist_train.sh \
    configs/crosskd/${CONFIG_FILE} ${GPU_NUM} [optional arguments]

Testing

python tools/test.py configs/crosskd/${CONFIG_FILE} ${CHECKPOINT_FILE}

Licensed under a Creative Commons Attribution-NonCommercial 4.0 International license, for non-commercial use only. Any commercial use requires formal permission first.
This repo is modified from the open-source object detection codebase MMDetection.