
# ORFD: A Dataset and Benchmark for Off-Road Freespace Detection

Repository for our ICRA 2022 paper "ORFD: A Dataset and Benchmark for Off-Road Freespace Detection".

## Introduction

Freespace detection is an essential component of autonomous driving and plays an important role in trajectory planning. Over the last decade, deep-learning-based freespace detection methods have proved feasible. However, these efforts focused on urban road environments, and few deep learning methods were designed specifically for off-road freespace detection, owing to the lack of an off-road dataset and benchmark. In this paper, we present the ORFD dataset, which, to our knowledge, is the first off-road freespace detection dataset. The dataset was collected in different scenes (woodland, farmland, grassland, and countryside), different weather conditions (sunny, rainy, foggy, and snowy), and different light conditions (bright light, daylight, twilight, darkness). In total, it contains 12,198 LiDAR point cloud and RGB image pairs, with the traversable area, non-traversable area, and unreachable area annotated in detail. We propose a novel network named OFF-Net, which unifies a Transformer architecture to aggregate local and global information, meeting the large-receptive-field requirement of the freespace detection task. We also propose a cross-attention mechanism to dynamically fuse LiDAR and RGB image information for accurate off-road freespace detection.
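For intuition only, the sketch below shows one way such cross-attention fusion between camera and LiDAR feature maps can be written in PyTorch. It is a minimal illustration of the idea, not the OFF-Net implementation; the module name, feature dimensions, and single-resolution setup are all assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion (hypothetical, not OFF-Net's code):
    RGB feature tokens attend to LiDAR feature tokens."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feat, lidar_feat):
        # Both inputs: (B, C, H, W) feature maps of the same shape.
        b, c, h, w = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)     # (B, H*W, C) queries from RGB
        kv = lidar_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from LiDAR
        fused, _ = self.attn(q, kv, kv)             # each RGB token attends to all LiDAR tokens
        fused = self.norm(fused + q)                # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)
```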

*(Demo images: demo 1, demo 2, demo 3.)*

## Requirements

## Pretrained models

The pretrained models of our OFF-Net trained on the ORFD dataset can be downloaded here.

## Prepare data

The proposed off-road freespace detection dataset ORFD can be downloaded from BaiduYun (extraction code: 1234, about 30 GB) or Google Drive. Extract and organize it as follows:

```
|-- datasets
|  |-- ORFD
|  |  |-- training
|  |  |  |-- sequence
|  |  |  |  |-- calib
|  |  |  |  |-- sparse_depth
|  |  |  |  |-- dense_depth
|  |  |  |  |-- lidar_data
|  |  |  |  |-- image_data
|  |  |  |  |-- gt_image
|  |  |  |-- ......
|  |  |-- validation
|  |  |  |-- ......
|  |  |-- testing
|  |  |  |-- ......
```
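Once extracted, paired samples can be enumerated per sequence, as in the hypothetical helper below. The function name and the assumption that `image_data/` and `gt_image/` share filenames are illustrative, not taken from the repository.

```python
from pathlib import Path

def list_pairs(root="datasets/ORFD", split="training"):
    """Hypothetical helper: collect (image, ground-truth) path pairs.

    Assumes each sequence folder holds image_data/ and gt_image/ with
    matching filenames -- verify against the extracted dataset.
    """
    pairs = []
    for seq in sorted(Path(root, split).iterdir()):
        img_dir, gt_dir = seq / "image_data", seq / "gt_image"
        if not img_dir.is_dir():
            continue
        for img in sorted(img_dir.glob("*")):
            gt = gt_dir / img.name
            if gt.exists():
                pairs.append((img, gt))
    return pairs

# Example: count training pairs after extraction.
# print(len(list_pairs()))
```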

The LiDAR coordinate system has x facing left, y facing forward, and z facing up. Generate the depth maps with:

```bash
python road_hesai40_process.py
```
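For reference, projecting LiDAR points into the camera to form a sparse depth map typically looks like the sketch below. This is a generic illustration under an assumed pinhole model (intrinsics `K` and LiDAR-to-camera extrinsics read from the calib files); the actual `road_hesai40_process.py` may differ in details.

```python
import numpy as np

def lidar_to_sparse_depth(points, T_cam_lidar, K, h, w):
    """Illustrative sketch: project LiDAR points into the image plane.

    points: (N, 3) LiDAR points (x left, y forward, z up, handled by the extrinsics).
    T_cam_lidar: (4, 4) LiDAR-to-camera transform (assumed from calib).
    K: (3, 3) camera intrinsic matrix; h, w: image size.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]                      # points in camera frame
    cam = cam[cam[:, 2] > 0]                                    # keep points in front of camera
    uvw = (K @ cam.T).T                                         # perspective projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = np.zeros((h, w), dtype=np.float32)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)             # inside image bounds
    depth[v[valid], u[valid]] = cam[valid, 2]                   # z-depth in meters
    return depth
```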

## Usage

### Demo

```bash
bash ./scripts/demo.sh
```

### Training

```bash
bash ./scripts/train.sh
```

### Testing

```bash
bash ./scripts/test.sh
```
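Freespace detection results are commonly scored with pixel-wise precision, recall, F-score, and IoU. The sketch below computes these for binary masks; it is illustrative and not necessarily the exact protocol used by `scripts/test.sh`.

```python
import numpy as np

def freespace_metrics(pred, gt):
    """Pixel-wise precision/recall/F1/IoU for binary freespace masks.

    A common evaluation for freespace detection, shown for reference only.
    pred, gt: (H, W) arrays where nonzero means traversable.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # correctly predicted freespace
    fp = np.logical_and(pred, ~gt).sum()       # false freespace
    fn = np.logical_and(~pred, gt).sum()       # missed freespace
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    iou = tp / max(tp + fp + fn, 1)
    return {"precision": precision, "recall": recall, "F1": f1, "IoU": iou}
```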

## License

Our code and dataset are released under the Apache 2.0 license.

## Acknowledgement

This repository is based on SNE-RoadSeg [1] and SegFormer [2].

## References

[1] Fan, Rui, et al. "SNE-RoadSeg: Incorporating surface normal information into semantic segmentation for accurate freespace detection." European Conference on Computer Vision. Springer, Cham, 2020.

[2] Xie, Enze, et al. "SegFormer: Simple and efficient design for semantic segmentation with transformers." Advances in Neural Information Processing Systems 34 (2021).