Commit

Add that participants need to submit certain info for display on leaderboard, add lidarseg challenge deadline (nutonomy#503)

* Initial commit

* Link to validate_submission.py
whyekit-motional authored Nov 25, 2020
1 parent 7222ced commit 2cea2a2
Showing 4 changed files with 39 additions and 3 deletions.
9 changes: 9 additions & 0 deletions python-sdk/nuscenes/eval/detection/README.md
@@ -22,6 +22,15 @@ To participate in the challenge, please create an account at [EvalAI](https://ev
Then upload your zipped result file including all of the required [meta data](#results-format).
After each challenge, the results will be exported to the nuScenes [leaderboard](https://www.nuscenes.org/object-detection) shown above.
This is the only way to benchmark your method against the test dataset.
We require that all participants send the following information to nuScenes@motional.com after submitting their results on EvalAI:
- Team name
- Method name
- Authors
- Affiliations
- Method description (5+ sentences)
- Project URL
- Paper URL
- FPS in Hz (and the hardware used to measure it)

## Challenges
To allow users to benchmark the performance of their method against the community, we host a single [leaderboard](https://www.nuscenes.org/object-detection) all year round.
15 changes: 12 additions & 3 deletions python-sdk/nuscenes/eval/lidarseg/README.md
@@ -21,6 +21,15 @@ To participate in the challenge, please create an account at [EvalAI](https://ev
Then upload your zipped result folder with the required [content](#results-format).
After each challenge, the results will be exported to the nuScenes [leaderboard](https://www.nuscenes.org/lidar-segmentation).
This is the only way to benchmark your method against the test dataset.
We require that all participants send the following information to nuScenes@motional.com after submitting their results on EvalAI:
- Team name
- Method name
- Authors
- Affiliations
- Method description (5+ sentences)
- Project URL
- Paper URL
- FPS in Hz (and the hardware used to measure it)

## Challenges
To allow users to benchmark the performance of their method against the community, we host a single [leaderboard](https://www.nuscenes.org/lidar-segmentation) all year round.
@@ -32,7 +41,7 @@ Click [here](https://eval.ai/web/challenges/challenge-page/720/overview) for the

### 5th AI Driving Olympics, NeurIPS 2020
The first nuScenes lidar segmentation challenge will be held at [NeurIPS 2020](https://nips.cc/Conferences/2020/).
- Submission will open on Nov 15, 2020 and close in early Dec, 2020.
+ Submission will open on Nov 15, 2020 and close on 8 Dec, 2020.
Results and winners will be announced at the [5th AI Driving Olympics](https://driving-olympics.ai/) at NeurIPS 2020.
For more information see the [leaderboard](https://www.nuscenes.org/lidar-segmentation).
Note that the [evaluation server](https://eval.ai/web/challenges/challenge-page/720/overview) can still be used to benchmark your results.
@@ -96,7 +105,7 @@ The contents of the `submission.json` file and `test` folder are defined below:
For the train and val sets, the evaluation can be performed by the user on their local machine.
For the test set, the user needs to zip the results folder and submit it to the official evaluation server.

- For convenience, a `validate_submission.py` script has been provided to check that a given results folder is of the correct format.
+ For convenience, a `validate_submission.py` [script](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/eval/lidarseg/validate_submission.py) has been provided to check that a given results folder is of the correct format.
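
As a rough illustration of the kind of check such a script performs (a sketch under assumptions, not the logic of the linked `validate_submission.py`), the snippet below verifies that a results folder holds one `uint8` label per lidar point for every sample. The `<token>_lidarseg.bin` naming scheme, the `points_per_token` input, and the `[0, 16]` label range are assumptions for illustration; run the linked script itself for authoritative checking.

```python
import os
import numpy as np

def check_results_folder(folder: str, points_per_token: dict) -> None:
    """Rudimentary format check: one uint8 label per lidar point, per sample.

    points_per_token is a hypothetical input mapping each lidar sample_data
    token to the number of points in its point cloud.
    """
    for token, num_points in points_per_token.items():
        path = os.path.join(folder, f'{token}_lidarseg.bin')  # assumed naming scheme
        assert os.path.isfile(path), f'Missing prediction file: {path}'
        labels = np.fromfile(path, dtype=np.uint8)
        assert labels.size == num_points, (
            f'{path}: expected {num_points} labels, got {labels.size}')
        assert labels.max(initial=0) <= 16, f'{path}: label index above 16'  # assumed range
```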

Note that the lidar segmentation classes differ from the general nuScenes classes, as detailed below.

@@ -158,7 +167,7 @@ We use the well-known IOU metric, which is defined as TP / (TP + FP + FN).
The IOU score is calculated separately for each class, and then the mean is computed across classes.
Note that lidar segmentation index 0 is ignored in the calculation.
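
To make the calculation concrete, here is a minimal NumPy sketch of per-class IOU and its class-wise mean. The confusion-matrix formulation and the 17-index label layout (0 = ignore) are illustrative assumptions, not the devkit's actual implementation.

```python
import numpy as np

def mean_iou(gt: np.ndarray, pred: np.ndarray, num_classes: int = 17) -> float:
    """Per-class IOU = TP / (TP + FP + FN), averaged over classes 1..num_classes-1."""
    mask = gt != 0                        # points with ignore label (index 0) are dropped
    g, p = gt[mask].astype(int), pred[mask].astype(int)
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    cm = np.bincount(num_classes * g + p, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp              # points predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp              # points of the class that were missed
    denom = tp + fp + fn
    valid = denom[1:] > 0                 # skip classes absent from both gt and pred
    return float((tp[1:][valid] / denom[1:][valid]).mean())
```

For example, `mean_iou(np.array([1, 2, 2, 0]), np.array([1, 2, 1, 2]))` scores only the three non-ignored points and returns 0.5.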

- ### Frequency-weighted IOU (FWIOU)
+ ### Frequency-weighted IOU (fwIOU)
Instead of taking the mean of the IOUs across all the classes, each IOU is weighted by the point-level frequency of its class.
Note that lidar segmentation index 0 is ignored in the calculation.
FWIOU is not used for the challenge.
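
Under the same assumed setup as the mIOU sketch above, FWIOU replaces the flat class average with a weighted one, where each class's weight is its share of the non-ignored ground-truth points:

```python
import numpy as np

def fw_iou(gt: np.ndarray, pred: np.ndarray, num_classes: int = 17) -> float:
    """Each class's IOU weighted by the point-level frequency of its class."""
    mask = gt != 0                        # index 0 (ignore) is excluded, as above
    g, p = gt[mask].astype(int), pred[mask].astype(int)
    cm = np.bincount(num_classes * g + p, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes)
    tp = np.diag(cm)
    denom = tp + (cm.sum(axis=0) - tp) + (cm.sum(axis=1) - tp)  # TP + FP + FN
    iou = tp[1:] / np.maximum(denom[1:], 1)   # classes with no points get IOU 0
    freq = cm.sum(axis=1)[1:]             # ground-truth point count per class
    return float((freq / freq.sum() * iou).sum())
```

Classes absent from the ground truth receive zero weight, so they do not affect the score.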
9 changes: 9 additions & 0 deletions python-sdk/nuscenes/eval/prediction/README.md
@@ -19,6 +19,15 @@ To participate in the challenge, please create an account at [EvalAI](https://ev
Then upload your zipped result file including all of the required [meta data](#results-format).
After each challenge, the results will be exported to the nuScenes [leaderboard](https://www.nuscenes.org/prediction) shown above.
This is the only way to benchmark your method against the test dataset.
We require that all participants send the following information to nuScenes@motional.com after submitting their results on EvalAI:
- Team name
- Method name
- Authors
- Affiliations
- Method description (5+ sentences)
- Project URL
- Paper URL
- FPS in Hz (and the hardware used to measure it)

## Challenges
To allow users to benchmark the performance of their method against the community, we will host a single leaderboard all year round.
9 changes: 9 additions & 0 deletions python-sdk/nuscenes/eval/tracking/README.md
@@ -42,6 +42,15 @@ To participate in the challenge, please create an account at [EvalAI](https://ev
Then upload your zipped result file including all of the required [meta data](#results-format).
The results will be exported to the nuScenes leaderboard shown above (coming soon).
This is the only way to benchmark your method against the test dataset.
We require that all participants send the following information to nuScenes@motional.com after submitting their results on EvalAI:
- Team name
- Method name
- Authors
- Affiliations
- Method description (5+ sentences)
- Project URL
- Paper URL
- FPS in Hz (and the hardware used to measure it)

## Challenges
To allow users to benchmark the performance of their method against the community, we host a single [leaderboard](#leaderboard) all year round.
