V2I-CALIB and V2I-CALIB++: Object-Level, Real-Time Point Cloud Global Registration Framework for V2I/V2X Applications
- An initial-value-free online calibration method for multi-end vehicle-infrastructure scenarios is proposed, based on perceived objects;
- A new multi-end target association method is proposed, which fully exploits spatial associations in the scene without requiring positioning priors;
- Both oIoU and oDist enable real-time monitoring of the extrinsic parameters in the scene (a simplified sketch of these object-level scores is shown below).
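For intuition only, here is a minimal sketch of how object-level scores of this kind can be computed from already-associated 3D boxes. It is not the repository's implementation (V2I-Calib uses oriented-box IoU and its own association step); the function names are illustrative, and the axis-aligned IoU is a simplification.

```python
import numpy as np

def transform_corners(T, corners):
    """Apply a 4x4 homogeneous transform T to an (8, 3) array of box corners."""
    homo = np.hstack([corners, np.ones((corners.shape[0], 1))])
    return (T @ homo.T).T[:, :3]

def aabb_iou_3d(a, b):
    """IoU of the axis-aligned boxes enclosing two (8, 3) corner sets
    (a simplification; the actual oIoU uses oriented boxes)."""
    min_a, max_a = a.min(0), a.max(0)
    min_b, max_b = b.min(0), b.max(0)
    inter = np.prod(np.clip(np.minimum(max_a, max_b) - np.maximum(min_a, min_b), 0, None))
    union = np.prod(max_a - min_a) + np.prod(max_b - min_b) - inter
    return inter / (union + 1e-9)

def object_level_scores(veh_boxes, inf_boxes, T_inf2veh):
    """Score a candidate infrastructure-to-vehicle extrinsic from matched box pairs.
    veh_boxes / inf_boxes: lists of (8, 3) corner arrays, already associated."""
    ious, center_d, vertex_d = [], [], []
    for bv, bi in zip(veh_boxes, inf_boxes):
        bi = transform_corners(T_inf2veh, bi)
        ious.append(aabb_iou_3d(bv, bi))                          # oIoU-style overlap
        center_d.append(np.linalg.norm(bv.mean(0) - bi.mean(0)))  # center-point distance
        vertex_d.append(np.linalg.norm(bv - bi, axis=1).mean())   # vertex distance
    return np.mean(ious), np.mean(center_d), np.mean(vertex_d)
```

A high mean IoU (or low mean distances) indicates that the candidate extrinsic is consistent with the objects currently perceived by both ends, which is what makes real-time monitoring possible.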
- [2024/09/13] V2I-CALIB++ is available here.
- [2024/06/30] V2I-CALIB was accepted by IROS 2024!
We compared V2I-Calib and V2I-Calib++ against strong point cloud global registration methods on two widely recognized V2X datasets, DAIR-V2X and V2X-Sim. The results are as follows.
This code is mainly developed under Ubuntu 20.04. We use anaconda3 with Python 3.8 as the base Python setup.
After cloning this repo, please run:
source setup.sh
To test the sample, simply run the following command:
python test.py --test_type single
For batch testing, additional data preparation is required. This process is also included in the test.py file.
Download the DAIR-V2X-C dataset here and organize it as follows:
# For DAIR-V2X-C Dataset located at ${DAIR-V2X-C_DATASET_ROOT}
├── cooperative-vehicle-infrastructure      # DAIR-V2X-C
    ├── infrastructure-side                 # DAIR-V2X-C-I
        ├── velodyne
            ├── {id}.pcd
        ├── label
            ├── camera                      # Labeled data in Infrastructure Virtual LiDAR Coordinate System fitting objects in image based on image frame time
                ├── {id}.json
            ├── virtuallidar                # Labeled data in Infrastructure Virtual LiDAR Coordinate System fitting objects in point cloud based on point cloud frame time
                ├── {id}.json
        ├── data_info.json                  # Relevant index information of the Infrastructure data
    ├── vehicle-side                        # DAIR-V2X-C-V
        ├── velodyne
            ├── {id}.pcd
        ├── label
            ├── camera                      # Labeled data in Vehicle LiDAR Coordinate System fitting objects in image based on image frame time
                ├── {id}.json
            ├── lidar                       # Labeled data in Vehicle LiDAR Coordinate System fitting objects in point cloud based on point cloud frame time
                ├── {id}.json
        ├── data_info.json                  # Relevant index information of the Vehicle data
    ├── cooperative                         # Cooperative files
        ├── label_world                     # Vehicle-Infrastructure Cooperative (VIC) annotation files
            ├── {id}.json
        ├── calib
            ├── lidar_i2v                   # Extrinsic parameters from Infrastructure LiDAR to Vehicle LiDAR
                ├── {id}.json               # Vehicle ID
        ├── data_info.json                  # Relevant index information combining the Infrastructure data and the Vehicle data
Note: cooperative-vehicle-infrastructure/cooperative/calib/lidar_i2v is generated by https://github.com/AIR-THU/DAIR-V2X/blob/main/tools/dataset_converter/calib_i2v.py
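These ground-truth extrinsics are handy for evaluating estimated transforms. Below is a minimal loader sketch; it assumes the usual DAIR-V2X calib convention of a rotation (3x3) and translation (3x1) field, so verify the key names against your copy of the dataset.

```python
import json
import numpy as np

def load_i2v_extrinsic(path):
    """Build a 4x4 infrastructure-to-vehicle transform from a lidar_i2v JSON file.
    Assumes 'rotation' (3x3) and 'translation' (3x1) fields, per the DAIR-V2X convention."""
    with open(path, "r") as f:
        calib = json.load(f)
    T = np.eye(4)
    T[:3, :3] = np.asarray(calib["rotation"], dtype=np.float64).reshape(3, 3)
    T[:3, 3] = np.asarray(calib["translation"], dtype=np.float64).reshape(3)
    return T

# Hypothetical frame id, for illustration only:
# T_i2v = load_i2v_extrinsic("data/DAIR-V2X/cooperative-vehicle-infrastructure/cooperative/calib/lidar_i2v/000000.json")
```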
cd ${v2i-calib_root}/v2i-calib
mkdir ./data/DAIR-V2X
ln -s ${DAIR-V2X-C_DATASET_ROOT}/cooperative-vehicle-infrastructure ${v2i-calib_root}/v2i-calib/data/DAIR-V2X
python test.py --test_type batch
The results are detailed in Log/xx.log. Execute Log/analyze.py to analyze the batch test results; the final results will be written to analysis_results.csv. You may find them even better than the comparison results above or those reported in the paper :-)
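If you prefer a quick programmatic look at the summary, something like the following works; the column layout depends entirely on what Log/analyze.py writes, so treat this as a sketch.

```python
import pandas as pd

# Inspect the batch-test summary produced by Log/analyze.py.
# No column names are assumed; describe() summarizes whatever numeric columns exist.
df = pd.read_csv("analysis_results.csv")
print(df.describe())
```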
The configuration parameters are located in config/config.yaml. To use the oIoU metric, set core_similarity_component_list = [iou]; to use the oDist metric, set core_similarity_component_list = [centerpoint_distance, vertex_distance].
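If you would rather switch metrics from a script than edit the file by hand, a small sketch is shown below; it assumes core_similarity_component_list sits at the top level of config/config.yaml, so adjust the key path if it is nested.

```python
import yaml

CONFIG_PATH = "config/config.yaml"

with open(CONFIG_PATH, "r") as f:
    cfg = yaml.safe_load(f)

# Pick one of the two metric configurations described above.
cfg["core_similarity_component_list"] = ["iou"]                                        # oIoU
# cfg["core_similarity_component_list"] = ["centerpoint_distance", "vertex_distance"]  # oDist

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False, sort_keys=False)
```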
This project would not be possible without the following codebases.
If you find our work or this repo useful, please cite:
@article{qu2024v2i,
  title={V2I-Calib: A Novel Calibration Approach for Collaborative Vehicle and Infrastructure LiDAR Systems},
  author={Qu, Qianxin and Xiong, Yijin and Wu, Xin and Li, Hanyu and Guo, Shichun},
  journal={arXiv preprint arXiv:2407.10195},
  year={2024}
}

@article{qu2024v2iplus,
  title={V2I-Calib++: A Multi-terminal Spatial Calibration Approach in Urban Intersections for Collaborative Perception},
  author={Qu, Qianxin and Zhang, Xinyu and Xiong, Yijin and Guo, Shichun and Song, Ziqiang and Li, Jun},
  journal={arXiv preprint arXiv:2410.11008},
  year={2024}
}