Commit

update codes and documents to update OpenPCDet to v0.6

sshaoshuai committed Sep 2, 2022
1 parent 1a42e44 commit edb82b9
Showing 8 changed files with 71 additions and 16 deletions.
10 changes: 8 additions & 2 deletions README.md
@@ -4,10 +4,10 @@

`OpenPCDet` is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.

It is also the official code release of [`[PointRCNN]`](https://arxiv.org/abs/1812.04244), [`[Part-A2-Net]`](https://arxiv.org/abs/1907.03670), [`[PV-RCNN]`](https://arxiv.org/abs/1912.13192), [`[Voxel R-CNN]`](https://arxiv.org/abs/2012.15712) and [`[PV-RCNN++]`](https://arxiv.org/abs/2102.00463).
It is also the official code release of [`[PointRCNN]`](https://arxiv.org/abs/1812.04244), [`[Part-A2-Net]`](https://arxiv.org/abs/1907.03670), [`[PV-RCNN]`](https://arxiv.org/abs/1912.13192), [`[Voxel R-CNN]`](https://arxiv.org/abs/2012.15712), [`[PV-RCNN++]`](https://arxiv.org/abs/2102.00463) and [`[MPPNet]`](https://arxiv.org/abs/2205.05979).

**Highlights**:
* `OpenPCDet` has been updated to `v0.5.2` (Jan. 2022).
* `OpenPCDet` has been updated to `v0.6.0` (Sep. 2022).
* The code of PV-RCNN++ is also supported.

## Overview
@@ -21,6 +21,12 @@ It is also the official code release of [`[PointRCNN]`](https://arxiv.org/abs/18


## Changelog
[2022-09-02] **NEW:** Update `OpenPCDet` to v0.6.0:
* Official code release of [MPPNet](https://arxiv.org/abs/2205.05979) for temporal 3D object detection, which supports long-term multi-frame 3D object detection and ranks 1st on the 3D detection leaderboard of the Waymo Open Dataset (see the [guideline](docs/guidelines_of_approaches/mppnet.md) on how to train/test with MPPNet).
* Support multi-frame training/testing on Waymo Open Dataset (see the [change log](docs/changelog.md) for more details on how to process data).
* Support saving training details (e.g., loss, iter, epoch) to a file (the previous tqdm progress bar is still supported by using `--use_tqdm_to_record`).
* Support saving the latest model every 5 minutes, so you can resume training from the latest state instead of the previous epoch.
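The time-based checkpoint trigger described above can be sketched as follows (a minimal illustration, not the actual OpenPCDet implementation; `should_save_latest` and the interval constant are hypothetical names):

```python
import time

CKPT_SAVE_INTERVAL_SEC = 5 * 60  # save the "latest" checkpoint every 5 minutes

def should_save_latest(now, last_save_time, interval=CKPT_SAVE_INTERVAL_SEC):
    # Trigger checkpointing by wall-clock time instead of by epoch,
    # so training can resume from the latest state after an interruption.
    return now - last_save_time >= interval

# In a training loop, one would check the trigger each iteration and,
# when it fires, save model/optimizer state and reset last_save_time:
last_save = time.time()
```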

[2022-08-22] Added a [custom dataset tutorial and template](docs/CUSTOM_DATASET_TUTORIAL.md).

[2022-07-05] Added support for the 3D object detection backbone network [`Focals Conv`](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Focal_Sparse_Convolutional_Networks_for_3D_Object_Detection_CVPR_2022_paper.pdf).
15 changes: 12 additions & 3 deletions docs/GETTING_STARTED.md
@@ -74,11 +74,15 @@ OpenPCDet
| | |── waymo_processed_data_v0_5_0
│ │ │ │── segment-xxxxxxxx/
| | | |── ...
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/
│ │ │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional)
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/ (old, for single-frame)
│ │ │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl (old, for single-frame)
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional, old, for single-frame)
│ │ │── waymo_processed_data_v0_5_0_infos_train.pkl (optional)
│ │ │── waymo_processed_data_v0_5_0_infos_val.pkl (optional)
| | |── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_multiframe_-4_to_0 (new, for single/multi-frame)
│ │ │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1_multiframe_-4_to_0.pkl (new, for single/multi-frame)
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_multiframe_-4_to_0_global.npy (new, for single/multi-frame)
├── pcdet
├── tools
```
@@ -92,8 +96,13 @@ pip3 install waymo-open-dataset-tf-2-5-0 --user
* Extract point cloud data from tfrecord and generate data infos by running the following command (it takes several hours,
and you can check `data/waymo/waymo_processed_data_v0_5_0` to see how many records have been processed):
```python
# only for single-frame setting
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
--cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml

# for single-frame or multi-frame setting
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
--cfg_file tools/cfgs/dataset_configs/waymo_dataset_multiframe.yaml
# Ignore 'CUDA_ERROR_NO_DEVICE' error as this process does not require GPU.
```
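To check the conversion progress mentioned above, one can simply count the processed segment folders (a small helper sketch; `count_processed_segments` is not part of the codebase, and the path follows the layout shown earlier):

```python
from pathlib import Path

def count_processed_segments(processed_dir):
    # Each converted tfrecord becomes a `segment-*` directory, so counting
    # them shows how many records have been processed so far.
    return sum(1 for p in Path(processed_dir).glob('segment-*') if p.is_dir())

# Example: count_processed_segments('data/waymo/waymo_processed_data_v0_5_0')
```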

41 changes: 41 additions & 0 deletions docs/changelog.md
@@ -0,0 +1,41 @@
# Changelog and Guidelines

### [2022-09-02] Update to v0.6.0:

* How to process data to support multi-frame training/testing on the Waymo Open Dataset?
* If you have never used OpenPCDet before, you can directly follow [GETTING_STARTED.md](GETTING_STARTED.md).
* If you have been using a previous version of OpenPCDet (`v0.5`), follow these steps to update your data:
* Update your Waymo infos (the `*.pkl` files for each sequence) by adding the argument `--update_info_only`:
```
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml --update_info_only
```
* Generate the multi-frame GT database for the copy-paste augmentation used in multi-frame training:
```
# A faster version with parallel data generation is available by adding `--use_parallel`, but you need to read the code and rename the output files afterwards
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_gt_database --cfg_file tools/cfgs/dataset_configs/waymo_dataset_multiframe.yaml
```
This will generate new files like the following (the last three lines under `data/waymo`):
```
OpenPCDet
├── data
│ ├── waymo
│ │ │── ImageSets
│ │ │── raw_data
│ │ │ │── segment-xxxxxxxx.tfrecord
| | | |── ...
| | |── waymo_processed_data_v0_5_0
│ │ │ │── segment-xxxxxxxx/
| | | |── ...
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/
│ │ │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional)
│ │ │── waymo_processed_data_v0_5_0_infos_train.pkl (optional)
│ │ │── waymo_processed_data_v0_5_0_infos_val.pkl (optional)
| | |── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_multiframe_-4_to_0 (new)
│ │ │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1_multiframe_-4_to_0.pkl (new)
│ │ │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_multiframe_-4_to_0_global.npy (new, optional)

├── pcdet
├── tools
```
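As noted above, the `--use_parallel` variant writes its outputs with a `_parallel` suffix (see the save paths in `waymo_dataset.py`), which then have to be renamed by hand. A hedged sketch of that rename step (`strip_parallel_suffix` is a hypothetical helper; verify the exact target names against the code before using it):

```python
from pathlib import Path

def strip_parallel_suffix(waymo_dir):
    # Rename e.g. `..._multiframe_-4_to_0_parallel` -> `..._multiframe_-4_to_0`
    # and `..._global_parallel.npy` -> `..._global.npy`, assuming the training
    # configs expect the names without the `_parallel` suffix.
    renamed = []
    for p in sorted(Path(waymo_dir).glob('*_parallel*')):
        target = p.with_name(p.name.replace('_parallel', ''))
        if not target.exists():
            p.rename(target)
            renamed.append(target.name)
    return renamed
```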
1 change: 1 addition & 0 deletions docs/guidelines_of_approaches/mppnet.md
@@ -0,0 +1 @@
# Will be available soon
11 changes: 5 additions & 6 deletions pcdet/datasets/waymo/waymo_dataset.py
@@ -351,7 +351,8 @@ def create_groundtruth_database(self, info_path, save_path, used_classes=None, s

if use_sequence_data:
st_frame, ed_frame = self.dataset_cfg.SEQUENCE_CONFIG.SAMPLE_OFFSET[0], self.dataset_cfg.SEQUENCE_CONFIG.SAMPLE_OFFSET[1]
st_frame = min(-4, st_frame) # at least we use 5 frames for generating gt database to support various sequence configs (<= 5 frames)
self.dataset_cfg.SEQUENCE_CONFIG.SAMPLE_OFFSET[0] = min(-4, st_frame)  # use at least 5 frames when generating the gt database, so sequence configs with <= 5 frames are supported
st_frame = self.dataset_cfg.SEQUENCE_CONFIG.SAMPLE_OFFSET[0]
database_save_path = save_path / ('%s_gt_database_%s_sampled_%d_multiframe_%s_to_%s' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame))
db_info_save_path = save_path / ('%s_waymo_dbinfos_%s_sampled_%d_multiframe_%s_to_%s.pkl' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame))
db_data_save_path = save_path / ('%s_gt_database_%s_sampled_%d_multiframe_%s_to_%s_global.npy' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame))
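The clamping in the change above can be illustrated in isolation (a minimal sketch; `clamp_start_offset` is not a name from the codebase):

```python
def clamp_start_offset(st_frame, min_offset=-4):
    # The gt database is always built from at least 5 frames (offsets -4..0),
    # so a single database can serve any sequence config using <= 5 frames.
    return min(min_offset, st_frame)
```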
@@ -547,12 +548,10 @@ def create_groundtruth_database_parallel(self, info_path, save_path, used_classe
st_frame = self.dataset_cfg.SEQUENCE_CONFIG.SAMPLE_OFFSET[0]
database_save_path = save_path / ('%s_gt_database_%s_sampled_%d_multiframe_%s_to_%s_%sparallel' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame, 'tail_' if crop_gt_with_tail else ''))
db_info_save_path = save_path / ('%s_waymo_dbinfos_%s_sampled_%d_multiframe_%s_to_%s_%sparallel.pkl' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame, 'tail_' if crop_gt_with_tail else ''))
db_data_save_path = save_path / ('%s_gt_database_%s_sampled_%d_multiframe_%s_to_%s_%sglobal_parallel.npy' % (processed_data_tag, split, sampled_interval, st_frame, ed_frame, 'tail_' if crop_gt_with_tail else ''))
else:
database_save_path = save_path / ('%s_gt_database_%s_sampled_%d_parallel' % (processed_data_tag, split, sampled_interval))
db_info_save_path = save_path / ('%s_waymo_dbinfos_%s_sampled_%d_parallel.pkl' % (processed_data_tag, split, sampled_interval))
db_data_save_path = save_path / ('%s_gt_database_%s_sampled_%d_global_parallel.npy' % (processed_data_tag, split, sampled_interval))


database_save_path.mkdir(parents=True, exist_ok=True)

with open(info_path, 'rb') as f:
@@ -670,7 +669,7 @@ def create_waymo_gt_database(
parser.add_argument('--processed_data_tag', type=str, default='waymo_processed_data_v0_5_0', help='')
parser.add_argument('--update_info_only', action='store_true', default=False, help='')
parser.add_argument('--use_parallel', action='store_true', default=False, help='')
parser.add_argument('--crop_gt_with_tail', action='store_true', default=False, help='')
parser.add_argument('--wo_crop_gt_with_tail', action='store_true', default=False, help='')

args = parser.parse_args()

@@ -706,7 +705,7 @@
save_path=ROOT_DIR / 'data' / 'waymo',
processed_data_tag=args.processed_data_tag,
use_parallel=args.use_parallel,
crop_gt_with_tail=args.crop_gt_with_tail
crop_gt_with_tail=not args.wo_crop_gt_with_tail
)
else:
raise NotImplementedError
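The renamed flag above inverts the default: cropping the gt boxes "with tail" is now on unless explicitly disabled. A small sketch of the inversion, using only the flag shown in the diff:

```python
import argparse

parser = argparse.ArgumentParser()
# `--wo_crop_gt_with_tail` disables the behavior, so the feature defaults to on.
parser.add_argument('--wo_crop_gt_with_tail', action='store_true', default=False)

args = parser.parse_args([])                        # no flag given
crop_gt_with_tail = not args.wo_crop_gt_with_tail   # enabled by default

args_off = parser.parse_args(['--wo_crop_gt_with_tail'])
crop_gt_with_tail_off = not args_off.wo_crop_gt_with_tail  # explicitly disabled
```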
2 changes: 1 addition & 1 deletion setup.py
@@ -28,7 +28,7 @@ def write_version_to_file(version, target_file):


if __name__ == '__main__':
version = '0.5.2+%s' % get_git_commit_number()
version = '0.6.0+%s' % get_git_commit_number()
write_version_to_file(version, 'pcdet/version.py')

setup(
1 change: 0 additions & 1 deletion tools/cfgs/dataset_configs/waymo_dataset.yaml
@@ -62,7 +62,6 @@ POINT_FEATURE_ENCODING: {
DATA_PROCESSOR:
- NAME: mask_points_and_boxes_outside_range
REMOVE_OUTSIDE_BOXES: True
USE_CENTER_TO_FILTER: True

- NAME: shuffle_points
SHUFFLE_ENABLED: {
6 changes: 3 additions & 3 deletions tools/cfgs/dataset_configs/waymo_dataset_multiframe.yaml
@@ -35,7 +35,7 @@ DATA_AUGMENTOR:
DB_INFO_PATH:
- waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1_multiframe_-4_to_0.pkl

USE_SHARED_MEMORY: True # set it to True to speed up (it costs about 15GB shared memory)
USE_SHARED_MEMORY: False # set it to True to speed up (it costs roughly 50GB of shared memory)
DB_DATA_PATH:
- waymo_processed_data_v0_5_0_gt_database_train_sampled_1_multiframe_-4_to_0_global.npy

@@ -84,6 +84,6 @@ DATA_PROCESSOR:
VOXEL_SIZE: [0.1, 0.1, 0.15]
MAX_POINTS_PER_VOXEL: 5
MAX_NUMBER_OF_VOXELS: {
'train': 150000,
'test': 150000
'train': 180000,
'test': 400000
}
