# FPV-RCNN: Keypoints-Based Deep Feature Fusion for Cooperative Vehicle Detection of Autonomous Driving
This project is highly dependent on the repos OpenPCDet and SpConv.
- Download the [COMAP](https://data.uni-hannover.de:8080/dataset/upload/users/ikg/yuan/cosense3d/COMAP/) dataset ([mirror](https://seafile.cloud.uni-hannover.de/f/cb3e9e25646b4119a5e6/?dl=1)) and extract all folders.
- Download the pre-trained CIA-SSD checkpoint and store it in a new logging path.

Tested on Ubuntu 16.04 with CUDA 10.1.
```shell
apt-get update -qq && apt-get install -y software-properties-common git nano
# for compiling spconv
apt-get install -y libboost-all-dev build-essential libssl-dev
# build python venv and install python packages
cd FPV_RCNN && python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
# build spconv and the custom ops
cd spconv && python setup.py bdist_wheel
cd dist && pip install ./*.whl
cd ../.. && python setup.py develop
```
Configurations for dataset pre-processing, the model, training, and testing are all defined in the Python files of the `cfg` folder. To train the network with the default settings, only the data path `Dataset.root` and the output logging path `Optimization.PATHS['run']` need to be set. The logging path must contain the pre-trained CIA-SSD checkpoint. For example, to train FPV-RCNN, configure `fusion_pvrcnn_comap.py` as follows:
```python
self.PATHS = {
    'run': '/path/to/experiments_output/fusion-pvrcnn'
}
```
Then put the CIA-SSD checkpoint in the folder `experiments_output`.
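For orientation, the two required settings might look like the sketch below. The class names `Dataset` and `Optimization` are assumptions inferred from the option names mentioned above, not verified against the repo's actual `cfg` file layout:

```python
# Hedged sketch of cfg/fusion_pvrcnn_comap.py; the class names
# Dataset and Optimization are assumptions, not the verified repo code.
class Dataset:
    def __init__(self):
        # path to the extracted COMAP dataset
        self.root = '/path/to/COMAP'


class Optimization:
    def __init__(self):
        # logging path; must contain the pre-trained CIA-SSD checkpoint
        self.PATHS = {
            'run': '/path/to/experiments_output/fusion-pvrcnn'
        }


print(Optimization().PATHS['run'])
```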
Pass the cfg file name (e.g. `"fusion_pvrcnn_comap"`) to the function `cfg_from_py` in the training or testing script, and run

```shell
python tools/train_fusion_detector.py
# or
python tools/test_fusion_detector.py
```
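As a rough illustration of what passing the cfg name does, `cfg_from_py` presumably imports the matching module from the `cfg` folder. The stand-in below (the real helper's name resolution and signature may differ) sketches this with `importlib`, using a throwaway config file in place of `cfg/fusion_pvrcnn_comap.py`:

```python
import importlib
import pathlib
import sys
import tempfile


def cfg_from_py(cfg_name, cfg_dir):
    """Hypothetical stand-in for the repo's cfg_from_py helper:
    import the module <cfg_dir>/<cfg_name>.py by name."""
    sys.path.insert(0, str(cfg_dir))
    try:
        return importlib.import_module(cfg_name)
    finally:
        sys.path.remove(str(cfg_dir))


# demo: a temporary file stands in for cfg/fusion_pvrcnn_comap.py
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / 'fusion_pvrcnn_comap.py').write_text(
    "PATHS = {'run': '/path/to/experiments_output/fusion-pvrcnn'}\n"
)
cfg = cfg_from_py('fusion_pvrcnn_comap', tmp)
print(cfg.PATHS['run'])
```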
If you find this work useful in your research, please consider citing:
```bibtex
@ARTICLE{9682601,
  author={Yuan, Yunshuang and Cheng, Hao and Sester, Monika},
  journal={IEEE Robotics and Automation Letters},
  title={Keypoints-Based Deep Feature Fusion for Cooperative Vehicle Detection of Autonomous Driving},
  year={2022},
  volume={7},
  number={2},
  pages={3054-3061},
  doi={10.1109/LRA.2022.3143299}}
```
or cite:
```bibtex
@article{comap,
  AUTHOR = {Yunshuang Yuan and Monika Sester},
  TITLE = {{COMAP}: A SYNTHETIC DATASET FOR COLLECTIVE MULTI-AGENT PERCEPTION OF AUTONOMOUS DRIVING},
  JOURNAL = {The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
  VOLUME = {XLIII-B2-2021},
  YEAR = {2021},
  PAGES = {255--263},
  URL = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B2-2021/255/2021/},
  DOI = {10.5194/isprs-archives-XLIII-B2-2021-255-2021}
}
```
if you use the COMAP dataset.