This is the PyTorch implementation of TrackMPNN for the KITTI and BDD100K multi-object tracking (MOT) datasets.
- Clone this repository
- Install Pipenv:
pip3 install pipenv
- Install all requirements and dependencies in a new virtual environment using Pipenv:
cd TrackMPNN
pipenv install
- Get the link for the desired PyTorch wheel from here and install it in the Pipenv virtual environment as follows:
pipenv install https://download.pytorch.org/whl/cu100/torch-1.2.0-cp36-cp36m-manylinux1_x86_64.whl
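To confirm that the wheel installed correctly and that CUDA is visible to PyTorch, you can optionally run a quick check (not part of the original setup steps):
pipenv run python -c "import torch; print(torch.__version__, torch.cuda.is_available())"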
- Clone and make DCNv2 (note: gcc-8 is the highest supported version in case you are on Ubuntu 20.04+):
cd models/dla
git clone https://github.com/CharlesShang/DCNv2
cd DCNv2
./make.sh
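If the build succeeds, the compiled extension should be importable from inside the DCNv2 directory. A quick optional check, assuming the module is named dcn_v2 as in the CharlesShang/DCNv2 repository:
pipenv run python -c "from dcn_v2 import DCN; print('DCNv2 built successfully')" # run from inside models/dla/DCNv2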
- Download the ImageNet pre-trained embedding network weights to the `TrackMPNN/weights` folder.
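For example, assuming you are in the repository root and the weights have already been downloaded (the filename below is illustrative, not the actual name of the weight file):
mkdir -p weights # create the folder if it does not already exist
mv ~/Downloads/embedding_weights.pth weights/ # replace with the actual downloaded filename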
- Download and extract the KITTI multi-object tracking (MOT) dataset (including images, labels, and calibration files).
- Download the RRC and CenterTrack detections for both the `training` and `testing` splits and add them to the KITTI MOT folder. The dataset should be organized as follows:
└── kitti-mot
    ├── training/
    │   ├── calib/
    │   ├── image_02/
    │   ├── label_02/
    │   ├── rrc_detections/
    │   └── centertrack_detections/
    └── testing/
        ├── calib/
        ├── image_02/
        ├── rrc_detections/
        └── centertrack_detections/
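A quick way to confirm that the extracted dataset matches the layout above (an optional sanity check, not part of the original instructions):
ls /path/to/kitti-mot/training # should list calib, image_02, label_02, rrc_detections, centertrack_detections
ls /path/to/kitti-mot/testing # should list calib, image_02, rrc_detections, centertrack_detections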
TrackMPNN can be trained using RRC detections as follows:
pipenv shell # activate virtual environment
python train.py --dataset=kitti --dataset-root-path=/path/to/kitti-mot/ --cur-win-size=5 --detections=rrc --feats=2d --category=Car --no-tp-classifier --epochs=30 --random-transforms
exit # exit virtual environment
TrackMPNN can also be trained using CenterTrack detections as follows:
pipenv shell # activate virtual environment
python train.py --dataset=kitti --dataset-root-path=/path/to/kitti-mot/ --cur-win-size=5 --detections=centertrack --feats=2d --category=All --no-tp-classifier --epochs=50 --random-transforms
exit # exit virtual environment
By default, the model is trained to track `All` object categories, but you can supply the `--category` argument with any one of the following options: `['Pedestrian', 'Car', 'Cyclist', 'All']`.
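For example, to train a model that only tracks pedestrians using CenterTrack detections (an illustrative variation of the commands above; all other arguments are unchanged):
pipenv shell # activate virtual environment
python train.py --dataset=kitti --dataset-root-path=/path/to/kitti-mot/ --cur-win-size=5 --detections=centertrack --feats=2d --category=Pedestrian --no-tp-classifier --epochs=50 --random-transforms
exit # exit virtual environment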
Inference on the `testing` split can be carried out using the `infer.py` script as follows:
pipenv shell # activate virtual environment
python infer.py --snapshot=/path/to/snapshot.pth --dataset-root-path=/path/to/kitti-mot/ --hungarian
exit # exit virtual environment
All settings from training will be carried over for inference.
Config files, logs, results, and snapshots from running the above scripts will be stored in the `TrackMPNN/experiments` folder by default.
You can use the `utils/visualize_mot.py` script to generate a video of the tracking results after running the inference script:
pipenv shell # activate virtual environment
python utils/visualize_mot.py /path/to/testing/inference/results /path/to/kitti-mot/testing/image_02
exit # exit virtual environment
The videos will be stored in the same folder as the inference results.
- Download and extract the BDD100K multi-object tracking (MOT) dataset (including images, labels, and calibration files).
- Download the HIN and Libra detections for the `training`, `validation`, and `testing` splits and add them to the BDD100K MOT folder. The dataset should be organized as follows:
└── bdd100k-mot
    ├── training/
    │   ├── calib/
    │   ├── image_02/
    │   ├── label_02/
    │   ├── hin_detections/
    │   └── libra_detections/
    ├── validation/
    │   ├── calib/
    │   ├── image_02/
    │   ├── label_02/
    │   ├── hin_detections/
    │   └── libra_detections/
    └── testing/
        ├── calib/
        ├── image_02/
        ├── hin_detections/
        └── libra_detections/
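As with KITTI, you can quickly confirm that the extracted dataset matches this layout (an optional sanity check):
ls /path/to/bdd100k-mot/training # should list calib, image_02, label_02, hin_detections, libra_detections
ls /path/to/bdd100k-mot/validation # should list calib, image_02, label_02, hin_detections, libra_detections
ls /path/to/bdd100k-mot/testing # should list calib, image_02, hin_detections, libra_detections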
TrackMPNN can be trained using either HIN or Libra detections by setting the `--detections` argument accordingly; for example, using Libra detections:
pipenv shell # activate virtual environment
python train.py --dataset=bdd100k --dataset-root-path=/path/to/bdd100k-mot/ --cur-win-size=5 --detections=libra --feats=2d --category=All --no-tp-classifier --epochs=20 --random-transforms
exit # exit virtual environment
By default, the model is trained to track `All` object categories, but you can supply the `--category` argument with any one of the following options: `['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle', 'All']`.
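For example, to train a model that only tracks cars using Libra detections (an illustrative variation of the command above; all other arguments are unchanged):
pipenv shell # activate virtual environment
python train.py --dataset=bdd100k --dataset-root-path=/path/to/bdd100k-mot/ --cur-win-size=5 --detections=libra --feats=2d --category=car --no-tp-classifier --epochs=20 --random-transforms
exit # exit virtual environment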
Inference on the `testing` split can be carried out using the `infer.py` script as follows:
pipenv shell # activate virtual environment
python infer.py --snapshot=/path/to/snapshot.pth --dataset-root-path=/path/to/bdd100k-mot/ --hungarian
exit # exit virtual environment
All settings from training will be carried over for inference.
Config files, logs, results, and snapshots from running the above scripts will be stored in the `TrackMPNN/experiments` folder by default.
You can use the `utils/visualize_mot.py` script to generate a video of the tracking results after running the inference script:
pipenv shell # activate virtual environment
python utils/visualize_mot.py /path/to/testing/inference/results /path/to/bdd100k-mot/testing/image_02
exit # exit virtual environment
The videos will be stored in the same folder as the inference results.