We test on the KITTI-360 sequences. Check the kitti360_visualize repo for streaming and visualizing the data in ROS.
Baseline:

```bash
## copy example config
cd config
cp kitti360_wpose_example kitti360_wpose.py

## Modify config path (see the sketch after this block)
nano kitti360_wpose.py
cd ..

## Train
./launcher/train.sh configs/kitti360_wpose.py 0 $experiment_name

## Evaluation
python3 scripts/test.py configs/kitti360_wpose.py 0 $CHECKPOINT_PATH
```
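The `nano` step above usually just points the dataset and output paths at your local setup. The snippet below is only an illustration of the kind of lines to look for; the field names (`data_root`, `checkpoint_dir`) are placeholders, so keep whatever names the copied example config actually uses.

```python
# Illustrative placeholders only -- mirror the real field names from
# kitti360_wpose_example rather than these.
data_root = "/data/KITTI-360"              # local KITTI-360 download (assumption)
checkpoint_dir = "workdirs/kitti360_wpose" # where checkpoints get written (assumption)
```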
It is fine to use just the baseline model for most projects. After training the baseline, you can further re-train it with self-distillation:
```bash
## export checkpoint (a verification sketch follows this block)
python3 monodepth/transform_teacher.py $Pretrained_checkpoint $output_compressed_checkpoint

## copy example config
cd config
cp distill_kitti360_example distill_kitti360.py

## Modify config path and checkpoint path based on $output_compressed_checkpoint
nano distill_kitti360.py
cd ..

## Train
./launcher/train.sh configs/distill_kitti360.py 0 $experiment_name
```
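Before editing distill_kitti360.py, it can help to confirm that the compressed teacher checkpoint loads cleanly and to note its absolute path, since that path goes into the distillation config. A minimal sketch, assuming transform_teacher.py writes an ordinary PyTorch checkpoint (the path below is a placeholder):

```python
import torch

# Placeholder for $output_compressed_checkpoint from the step above.
ckpt_path = "/path/to/compressed_teacher.pth"

ckpt = torch.load(ckpt_path, map_location="cpu")
# Depending on how it was saved, this is either a bare state_dict or a
# wrapper dict; either way, listing a few keys confirms it is readable.
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
print(f"{len(keys)} top-level keys, e.g. {keys[:5]}")
```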
Check demos/demo.ipynb for dataset visualization and simple demos.
We support exporting pretrained models to ONNX; install onnx and onnxruntime first.

```bash
python3 scripts/onnx_export.py $CONFIG_FILE $CHECKPOINT_PATH $ONNX_PATH
```
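After exporting, you can sanity-check the ONNX file with onnxruntime before deploying it. A minimal sketch that feeds a random tensor through the exported graph; the path is a placeholder, and the input name and shape are read from the model rather than assumed:

```python
import numpy as np
import onnxruntime as ort

onnx_path = "/path/to/model.onnx"  # the $ONNX_PATH used above (placeholder)

sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])

# Read the expected input name/shape from the exported graph.
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

# Replace any dynamic dimensions with a concrete test size (1 for batch;
# for image dimensions, use the resolution the model was trained with).
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

for i, out in enumerate(sess.run(None, {inp.name: dummy})):
    print(f"output[{i}] shape: {np.asarray(out).shape}")
```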
- Launch kitti360_visualize to stream the image data topics and RViz visualization.
- Launch monodepth_ros to run inference on the camera topics (a minimal sketch of such a node follows this list).
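For reference, the ROS side boils down to subscribing to a camera image topic, running the network, and publishing a depth image. The sketch below only illustrates that loop and is not the actual monodepth_ros node; the topic names and the `infer_depth` call are placeholders to be replaced by the real package's settings.

```python
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def infer_depth(rgb):
    # Placeholder for the actual monocular depth network call.
    return np.zeros(rgb.shape[:2], dtype=np.float32)

def on_image(msg):
    rgb = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
    depth = infer_depth(rgb)
    depth_pub.publish(bridge.cv2_to_imgmsg(depth, encoding="32FC1"))

rospy.init_node("monodepth_sketch")
# Topic names are placeholders; match them to what kitti360_visualize publishes.
depth_pub = rospy.Publisher("/monodepth/depth", Image, queue_size=1)
rospy.Subscriber("/kitti360/image", Image, on_image, queue_size=1)
rospy.spin()
```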