Update readme for Panoptic nuScenes (nutonomy#669)
lubing-motional authored Oct 14, 2021
1 parent 168f259 commit fcc4162
Showing 3 changed files with 59 additions and 21 deletions.
README.md: 26 changes (19 additions & 7 deletions)
@@ -10,7 +10,7 @@ Welcome to the devkit of the [nuScenes](https://www.nuscenes.org/nuscenes) and [nuImages](https://www.nuscenes.org/nuimages) datasets.
- [Getting started with nuImages](#getting-started-with-nuimages)
- [nuScenes](#nuscenes)
- [nuScenes setup](#nuscenes-setup)
-- [nuScenes-panoptic](#nuscenes-panoptic)
+- [Panoptic nuScenes](#panoptic-nuscenes)
- [nuScenes-lidarseg](#nuscenes-lidarseg)
- [Prediction challenge](#prediction-challenge)
- [CAN bus expansion](#can-bus-expansion)
@@ -23,8 +23,8 @@ Welcome to the devkit of the [nuScenes](https://www.nuscenes.org/nuscenes) and [nuImages](https://www.nuscenes.org/nuimages) datasets.
## Changelog
- Sep. 20, 2021: Devkit v1.1.9: Refactor tracking eval code for custom datasets with different classes.
- Sep. 17, 2021: Devkit v1.1.8: Add PAT metric to Panoptic nuScenes.
-- Aug. 23, 2021: Devkit v1.1.7: Add more panoptic tracking metrics to nuScenes-panoptic code.
-- Jul. 29, 2021: Devkit v1.1.6: nuScenes-panoptic v1.0 code, NeurIPS challenge announcement.
+- Aug. 23, 2021: Devkit v1.1.7: Add more panoptic tracking metrics to Panoptic nuScenes code.
+- Jul. 29, 2021: Devkit v1.1.6: Panoptic nuScenes v1.0 code, NeurIPS challenge announcement.
- Apr. 5, 2021: Devkit v1.1.3: Bug fixes and pip requirements.
- Nov. 23, 2020: Devkit v1.1.2: Release map-expansion v1.3 with lidar basemap.
- Nov. 9, 2020: Devkit v1.1.1: Lidarseg evaluation code, NeurIPS challenge announcement.
@@ -102,10 +102,10 @@ Eventually you should have the following folder structure:
```
If you want to use another folder, specify the `dataroot` parameter of the NuScenes class (see tutorial).
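
For illustration, a minimal sketch of that parameter in use (the version and path below are example values):
```
from nuscenes.nuscenes import NuScenes

# Point the devkit at a custom dataset location via `dataroot`.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)
```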

-### nuScenes-panoptic
-In August 2021 we published [nuScenes-panoptic](https://www.nuscenes.org/nuscenes) which contains the panoptic
-labels of the point clouds for the approximately 40,000 keyframes in nuScenes.
-To install nuScenes-panoptic, please follow these steps:
+### Panoptic nuScenes
+In August 2021 we published [Panoptic nuScenes](https://www.nuscenes.org/panoptic) which contains the panoptic labels
+of the point clouds for the approximately 40,000 keyframes in nuScenes.
+To install Panoptic nuScenes, please follow these steps:
- Download the dataset from the [Download page](https://www.nuscenes.org/download),
- Extract the `panoptic` and `v1.0-*` folders to your nuScenes root directory (e.g. `/data/sets/nuscenes/panoptic`, `/data/sets/nuscenes/v1.0-*`).
- Get the latest version of the nuscenes-devkit.
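
As a quick sanity check after these steps, a single panoptic label file can be read directly with numpy. This is a sketch, assuming labels are stored under the `data` key of each `.npz` file and packed as `category_id * 1000 + instance_id`; the file name is a placeholder:
```
import numpy as np

# Placeholder path: real files live under <dataroot>/panoptic/<version>/ and are
# named per lidar sample data token.
arr = np.load('/data/sets/nuscenes/panoptic/v1.0-mini/example_panoptic.npz')['data']

# Each value packs category and instance: label = category_id * 1000 + instance_id.
category = arr // 1000  # per-point semantic class
instance = arr % 1000   # per-point instance id (0 for stuff points)
print(arr.shape, category.min(), category.max())
```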
@@ -199,4 +199,16 @@ Please use the following citation when referencing [nuScenes or nuImages](https:
}
```

+Please use the following citation when referencing
+[Panoptic nuScenes or nuScenes-lidarseg](https://arxiv.org/abs/2109.03805):
+```
+@article{fong2021panoptic,
+  title={Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking},
+  author={Fong, Whye Kit and Mohan, Rohit and Hurtado, Juana Valeria and Zhou, Lubing and Caesar, Holger and
+          Beijbom, Oscar and Valada, Abhinav},
+  journal={arXiv preprint arXiv:2109.03805},
+  year={2021}
+}
+```

![](https://www.nuscenes.org/public/images/nuscenes-example.png)
python-sdk/nuscenes/eval/lidarseg/README.md: 13 changes (13 additions & 0 deletions)
@@ -3,6 +3,7 @@

## Overview
- [Introduction](#introduction)
+- [Citation](#citation)
- [Participation](#participation)
- [Challenges](#challenges)
- [Submission rules](#submission-rules)
@@ -15,6 +16,18 @@
Here we define the lidar segmentation task on nuScenes.
The goal of this task is to predict the category of every point in a set of point clouds. There are 16 categories (10 foreground classes and 6 background classes).
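
To make the task concrete, here is a hypothetical sketch of what one prediction looks like in memory; the point count and file name are invented, and the flat uint8 layout mirrors how lidarseg label files are stored:
```
import numpy as np

# Hypothetical cloud of 30,000 points: predict one class index per point.
num_points = 30000
preds = np.random.randint(1, 17, size=num_points).astype(np.uint8)  # classes 1..16

# Lidarseg labels ship as flat uint8 .bin files; predictions for one sample can
# be serialized the same way (file name is a placeholder).
preds.tofile('example_lidarseg.bin')
```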

+## Citation
+When using the dataset in your research, please cite [Panoptic nuScenes](https://arxiv.org/abs/2109.03805):
+```
+@article{fong2021panoptic,
+  title={Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking},
+  author={Fong, Whye Kit and Mohan, Rohit and Hurtado, Juana Valeria and Zhou, Lubing and Caesar, Holger and
+          Beijbom, Oscar and Valada, Abhinav},
+  journal={arXiv preprint arXiv:2109.03805},
+  year={2021}
+}
+```
+
## Participation
The nuScenes lidarseg segmentation [evaluation server](https://eval.ai/web/challenges/challenge-page/720/overview) is open all year round for submission.
To participate in the challenge, please create an account at [EvalAI](https://eval.ai).
python-sdk/nuscenes/eval/panoptic/README.md: 41 changes (27 additions & 14 deletions)
@@ -3,6 +3,7 @@

## Overview
- [Introduction](#introduction)
+- [Citation](#citation)
- [Participation](#participation)
- [Challenges](#challenges)
- [Submission rules](#submission-rules)
@@ -19,14 +20,26 @@ While panoptic segmentation focuses on static frames, panoptic tracking additionally requires
pixel-level associations over time. For both tasks, there are 16 categories (10 thing and 6 stuff classes). Refer to
the [Panoptic nuScenes paper](https://arxiv.org/pdf/2109.03805.pdf) for more details.
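
For reference, both tasks build on the standard panoptic quality (PQ) metric of Kirillov et al.; the generic definition (not the paper's full PAT formulation) is:
```
\mathrm{PQ} = \frac{\sum_{(p,\,g) \in \mathit{TP}} \mathrm{IoU}(p, g)}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}
```
where TP, FP and FN are the matched, unmatched-predicted and unmatched-ground-truth segments, and a match requires IoU > 0.5.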

+## Citation
+When using the dataset in your research, please cite [Panoptic nuScenes](https://arxiv.org/abs/2109.03805):
+```
+@article{fong2021panoptic,
+  title={Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking},
+  author={Fong, Whye Kit and Mohan, Rohit and Hurtado, Juana Valeria and Zhou, Lubing and Caesar, Holger and
+          Beijbom, Oscar and Valada, Abhinav},
+  journal={arXiv preprint arXiv:2109.03805},
+  year={2021}
+}
+```
+
## Participation
-The nuScenes panoptic [evaluation server](https://eval.ai/web/challenges/challenge-page/1243/overview) is open all year
-round for submission. Participants can choose to attend both panoptic segmentation and panoptic tracking tasks, or only
-the panoptic segmentation task. To participate in the challenge, please create an account at [EvalAI](https://eval.ai).
-Then upload your zipped result folder with the required [content](#results-format). After each challenge, the results
-will be exported to the nuScenes [panoptic leaderboard](https://www.nuscenes.org/panoptic). This is the only way to
-benchmark your method against the test dataset. We require that all participants send the following information to
-nuScenes@motional.com after submitting their results on EvalAI:
+The Panoptic nuScenes challenge [evaluation server](https://eval.ai/web/challenges/challenge-page/1243/overview) is
+open all year round for submission. Participants can choose to attend both panoptic segmentation and panoptic tracking
+tasks, or only the panoptic segmentation task. To participate in the challenge, please create an account at
+[EvalAI](https://eval.ai). Then upload your zipped result folder with the required [content](#results-format). After
+each challenge, the results will be exported to the [Panoptic nuScenes leaderboard](https://www.nuscenes.org/panoptic).
+This is the only way to benchmark your method against the test dataset. We require that all participants send the
+following information to nuScenes@motional.com after submitting their results on EvalAI:
- Team name
- Method name
- Authors
@@ -38,16 +51,16 @@

## Challenges
To allow users to benchmark the performance of their method against the community, we host a single
-[panoptic leaderboard](https://www.nuscenes.org/panoptic) with filters for different task tracks all year round.
-Additionally we organize a number of challenges at leading Computer Vision conference workshops. Users that submit
-their results during the challenge period are eligible for awards. Any user that cannot attend the workshop (direct or
-via a representative) will be excluded from the challenge, but will still be listed on the leaderboard.
+[Panoptic nuScenes leaderboard](https://www.nuscenes.org/panoptic) with filters for different task tracks all year
+round. Additionally we organize a number of challenges at leading Computer Vision conference workshops. Users that
+submit their results during the challenge period are eligible for awards. Any user that cannot attend the workshop
+(direct or via a representative) will be excluded from the challenge, but will still be listed on the leaderboard.

### 7th AI Driving Olympics, NeurIPS 2021
The first Panoptic nuScenes challenge will be held at [NeurIPS 2021](https://nips.cc/Conferences/2021/).
-Submissions will be accepted from September, 2021. Results and winners will be announced at the
-[7th AI Driving Olympics](https://driving-olympics.ai/) at NeurIPS 2021. For more information see the
-[leaderboard](https://www.nuscenes.org/panoptic). Note that the
+Submissions will be accepted from 1 September 2021. **The submission deadline is 1 December 2021, 12:00pm, noon, UTC.**
+Results and winners will be announced at the [7th AI Driving Olympics](https://driving-olympics.ai/) at NeurIPS 2021.
+For more information see the [leaderboard](https://www.nuscenes.org/panoptic). Note that the
[evaluation server](https://eval.ai/web/challenges/challenge-page/1243/overview) can still be used to benchmark your
results after the challenge.

