README.md: 7 additions & 1 deletion
@@ -22,6 +22,10 @@ It is also the official code release of [`[PointRCNN]`](https://arxiv.org/abs/18
## Changelog
+[2023-05-xx] Added support for the multi-modal 3D object detection model [`BEVFusion`](https://arxiv.org/abs/2205.13542) on the NuScenes dataset, which fuses multi-modal information in BEV space and reaches 70.98% NDS on the NuScenes validation set (see the [guideline](docs/guidelines_of_approaches/bevfusion.md) for how to train/test with BEVFusion).
+* Support multi-modal NuScenes detection (see [GETTING_STARTED.md](docs/GETTING_STARTED.md) to process the data; a sketch of the data-preparation command follows this hunk).
+* Support the TransFusion-LiDAR head, which achieves 69.43% NDS on the NuScenes validation set.
+
[2023-04-02] Added support for [`VoxelNeXt`](https://github.com/dvlab-research/VoxelNeXt) on the NuScenes, Waymo, and Argoverse2 datasets. It is a fully sparse 3D object detection network built on clean sparse CNNs that predicts 3D objects directly from voxels.
[2022-09-02] **NEW:** Update `OpenPCDet` to v0.6.0:
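The multi-modal NuScenes support above requires camera information in the dataset infos. A minimal sketch of the data-preparation step, assuming the `create_nuscenes_infos` entry point documented in [GETTING_STARTED.md](docs/GETTING_STARTED.md); the `--with_cam` flag is an assumption based on the multi-modal support described in this changelog:

```bash
# Run from the repository root after placing the NuScenes data under data/nuscenes.
# --with_cam additionally dumps camera/image info needed for multi-modal models (assumed flag name).
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
    --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
    --version v1.0-trainval \
    --with_cam
```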
@@ -199,7 +203,7 @@ We could not provide the above pretrained models due to [Waymo Dataset License A
but you could easily achieve similar performance by training with the default configs.
### NuScenes 3D Object Detection Baselines
-All models are trained with 8 GTX 1080Ti GPUs and are available for download.
+All models are trained with 8 GPUs and are available for download. For training BEVFusion, please refer to the [guideline](docs/guidelines_of_approaches/bevfusion.md).
The ckpt will be saved in `../output/nuscenes_models/cbgs_transfusion_lidar/default/ckpt`.
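A checkpoint saved there can be evaluated with OpenPCDet's standard test script. A sketch, assuming the usual `scripts/dist_test.sh` interface; the epoch number in the checkpoint name is hypothetical:

```bash
# Run from the tools/ directory; adjust the GPU count and checkpoint name to your run.
bash scripts/dist_test.sh 8 \
    --cfg_file cfgs/nuscenes_models/cbgs_transfusion_lidar.yaml \
    --ckpt ../output/nuscenes_models/cbgs_transfusion_lidar/default/ckpt/checkpoint_epoch_20.pth
```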
1. To train BEVFusion, you need to download the pretrained parameters for the image backbone [here](www.google.com) and specify the path in the [config](../../tools/cfgs/nuscenes_models/cbgs_bevfusion.yaml#L88). Then run the following command:
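A sketch of that command, assuming OpenPCDet's standard distributed training script (`scripts/dist_train.sh`) and the `cbgs_bevfusion.yaml` config referenced above, rather than the exact invocation from the guideline:

```bash
# Run from the tools/ directory; adjust the GPU count to your machine.
bash scripts/dist_train.sh 8 --cfg_file cfgs/nuscenes_models/cbgs_bevfusion.yaml
```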