This repository contains the PyTorch implementation of the paper "Follow the Footprints: Self-supervised Traversability Estimation for Off-road Vehicle Navigation based on Geometric and Visual Cues" (ICRA 2024).
- CUDA 11.3
- cuDNN 8
- Ubuntu 20.04
```shell
conda create -n ftfoot python=3.8
conda activate ftfoot
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
```
Patch the `torch-encoding` package as described below:
```shell
cd anaconda3/envs/ftfoot/lib/python3.8  # this path may differ, depending on your environment
```
In `site-packages/encoding/nn/syncbn.py`, at about line 200, change

```python
return syncbatchnorm(........).view(input_shape)
```

to

```python
x, _, _ = syncbatchnorm(........)
x = x.view(input_shape)
return x
```
In `site-packages/encoding/functions/syncbn.py`, at about line 102, change

```python
ctx.save_for_backward(x, _ex, _exs, gamma, beta)
return y
```

to

```python
ctx.save_for_backward(x, _ex, _exs, gamma, beta)
ctx.mark_non_differentiable(running_mean, running_var)
return y, running_mean, running_var
```
In the same file, `site-packages/encoding/functions/syncbn.py`, at about line 109, change

```python
def backward(ctx, dz):
```

to

```python
def backward(ctx, dz, _druning_mean, _druning_var):
```
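Since the `site-packages` path differs per environment, a small stdlib sketch (not part of the repo) can print the two files that need patching:

```python
import os
import sysconfig

# site-packages of the currently active environment (e.g. the ftfoot conda env)
site_packages = sysconfig.get_paths()["purelib"]

# The two torch-encoding files that need patching
targets = [
    os.path.join(site_packages, "encoding", "nn", "syncbn.py"),
    os.path.join(site_packages, "encoding", "functions", "syncbn.py"),
]
for path in targets:
    print(path)
```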
Then build and install the extensions:

```shell
cd exts
python setup.py install
```
- Download the RELLIS-3D dataset. The expected folder structure is as follows.
```
RELLIS-3D
├── Rellis-3D
|   ├── 00000
|   |   ├── os1_cloud_node_kitti_bin
|   |   ├── pylon_camera_node
|   |   ├── calib.txt
|   |   ├── camera_info.txt
|   |   └── poses.txt
|   ├── 00001
|   └── ..
└── Rellis_3D
    ├── 00000
    |   └── transforms.yaml
    ├── 00001
    └── ..
```
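The `poses.txt` files follow the KITTI odometry convention: each line holds the 12 row-major entries of a 3x4 pose matrix. A minimal stdlib parsing sketch (the helper name is ours, and the format assumption should be checked against your download):

```python
def parse_kitti_poses(text):
    """Parse KITTI-style pose lines into 4x4 row-major matrices (nested lists)."""
    poses = []
    for line in text.strip().splitlines():
        vals = [float(v) for v in line.split()]
        if len(vals) != 12:
            raise ValueError("expected 12 values per pose line, got %d" % len(vals))
        # 3x4 matrix plus the homogeneous bottom row [0, 0, 0, 1]
        poses.append([vals[0:4], vals[4:8], vals[8:12], [0.0, 0.0, 0.0, 1.0]])
    return poses

# Example: a single identity pose
example = "1 0 0 0 0 1 0 0 0 0 1 0"
identity = parse_kitti_poses(example)[0]
```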
- Prepare the data for training. Run:

```shell
sh ./data_prep/rellis_preproc.sh
```
- The final folder structure is as follows.
```
RELLIS-3D
├── Rellis-3D
├── Rellis_3D
└── Rellis-3D-custom
    ├── 00000
    |   ├── foot_print
    |   ├── super_pixel  # optional, but recommended for clear output!
    |   └── surface_normal
    ├── 00001
    └── ..
```
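A small sanity-check sketch (the function name and return format are our own, not part of the repo) to confirm that preprocessing produced the expected per-sequence folders:

```python
import os

# Folders rellis_preproc.sh is expected to create per sequence;
# super_pixel is optional, so it is only reported, not required.
REQUIRED_DIRS = ("foot_print", "surface_normal")

def check_sequence(custom_root, seq):
    """Return (missing_required_dirs, has_super_pixel) for one sequence."""
    seq_dir = os.path.join(custom_root, seq)
    missing = [d for d in REQUIRED_DIRS
               if not os.path.isdir(os.path.join(seq_dir, d))]
    has_sp = os.path.isdir(os.path.join(seq_dir, "super_pixel"))
    return missing, has_sp
```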
- Download the ORFD dataset. The expected folder structure is as follows.
```
ORFD
└── Final_Dataset
    ├── training
    |   ├── calib
    |   ├── dense_depth
    |   ├── gt_image
    |   ├── image_data
    |   ├── lidar_data
    |   └── sparse_depth
    ├── validation
    └── testing
```
- ORFD provides no pose data, so poses must be estimated from the point clouds; we used PyICP-SLAM for this. Place the resulting pose files under the following directory.
```
ORFD
├── Final_Dataset
└── ORFD-custom
    ├── training
    |   └── pose
    |       └── pose_16197787.csv
    ├── validation
    └── testing
```
- Prepare the data for training. Run:

```shell
sh ./data_prep/orfd_preproc.sh
```
- The final folder structure is as follows.
```
ORFD
├── Final_Dataset
└── ORFD-custom
    ├── training
    |   ├── foot_print
    |   ├── pose
    |   ├── super_pixel
    |   └── surface_normal
    ├── validation
    └── testing
```
Set `data_config/data_root` in the `configs/train.yaml` file and run:

```shell
python train.py configs/train.yaml
```
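For reference, the fragment of `configs/train.yaml` being edited presumably looks like the following (only the `data_config`/`data_root` key names come from the instructions above; the exact nesting is an assumption):

```yaml
data_config:
  data_root: /path/to/RELLIS-3D  # set to your dataset root
```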
Set `data_config/data_root` and `resume_path` in the `configs/test.yaml` file and run:

```shell
python test.py configs/test.yaml
```
Plot the cost map:

```shell
python ./plot_map/plot_map_rellis.py \
    --start_num 400 --end_num 900 \
    --save_rgb_img --save_valid_map \
    --cost_path ../outputs/prediction/your-ckpt-name
```
Run the path planner:

```shell
python ./path_plan/path_plan.py \
    --start_num 400 --end_num 900 \
    --local_planner_type TRRTSTAR \
    --max_path_iter 1000 --max_extend_length 10 --bias_sampling \
    --cost_map_name /path/to/your-ckpt-name.png
```
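With `--local_planner_type TRRTSTAR`, the tree is grown under the predicted cost map. The core idea of a T-RRT-style transition test can be sketched as below (the function name, the temperature constant, and the scalar-cost interface are our assumptions, not the repo's actual API):

```python
import math
import random

def transition_test(cost_from, cost_to, temperature=0.5, rng=random):
    """T-RRT transition test: always accept moves to lower cost; accept
    higher-cost moves only with probability exp(-(cost increase)/temperature),
    which biases the tree toward low-cost (traversable) regions."""
    if cost_to <= cost_from:
        return True
    return rng.random() < math.exp(-(cost_to - cost_from) / temperature)

# Downhill moves are always accepted:
assert transition_test(0.8, 0.2)
```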
- Our implementation of GFL is based on https://github.com/kakaxi314/GuideNet
- Our implementation of FSM is based on https://github.com/panzhiyi/URSS
If you have any questions, please contact Yurim Jeon at yurimjeon1892@gmail.com