Commit

Add legacy challenge results, clarify rules (nutonomy#513)
Co-authored-by: Holger Caesar <holger.caesar@motional.com>
holger-motional and Holger Caesar authored Dec 9, 2020
1 parent 5f337c5 commit c23f4bf
Showing 4 changed files with 56 additions and 10 deletions.
30 changes: 27 additions & 3 deletions python-sdk/nuscenes/eval/detection/README.md
@@ -42,7 +42,7 @@ Click [here](https://eval.ai/web/challenges/challenge-page/356/overview) for the

### 5th AI Driving Olympics, NeurIPS 2020
The third nuScenes detection challenge will be held at [NeurIPS 2020](https://nips.cc/Conferences/2020/).
Submissions will be accepted from November 1 to December 8, 2020.
Results and winners will be announced at the [5th AI Driving Olympics](https://driving-olympics.ai/) at NeurIPS 2020.
Note that this challenge uses the same [evaluation server](https://eval.ai/web/challenges/challenge-page/356/overview) as previous detection challenges.

@@ -52,13 +52,36 @@ The submission period will open April 1 and continue until May 28th, 2020.
Results and winners will be announced at the [Workshop on Benchmarking Progress in Autonomous Driving](http://montrealrobotics.ca/driving-benchmarks/).
Note that the previous [evaluation server](https://eval.ai/web/challenges/challenge-page/356/overview) can still be used to benchmark your results after the challenge period.

A summary of the results can be seen below.
For details, please refer to the [detection leaderboard](https://www.nuscenes.org/object-detection).

| Rank | Team name | NDS |
|--- |--- |--- |
| 1 | Noah CV Lab fusion | 69.7% |
| 2 | CenterPoint | 67.5% |
| 3 | CVCNet ensemble | 66.6% |
| 4 | CRIPAC | 63.2% |
| 5 | PanoNet3D | 63.1% |
| 6 | SSN | 61.6% |
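
NDS (nuScenes detection score) combines mAP with five true-positive error metrics (translation, scale, orientation, velocity and attribute error), each clipped to [0, 1] before being converted to a score. A minimal sketch of the published formula (function and variable names are illustrative, not the devkit API):

```python
def nd_score(mAP, tp_errors):
    """nuScenes Detection Score: NDS = (5 * mAP + sum of TP scores) / 10.

    mAP:       mean Average Precision in [0, 1].
    tp_errors: the five TP error metrics (mATE, mASE, mAOE, mAVE, mAAE);
               each error is clipped to [0, 1] and mapped to a score 1 - err.
    """
    tp_scores = [1.0 - min(1.0, err) for err in tp_errors.values()]
    return (5.0 * mAP + sum(tp_scores)) / 10.0
```

For example, a submission with mAP 0.5 and all TP errors at zero would score NDS 0.75.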

### Workshop on Autonomous Driving, CVPR 2019
The first nuScenes detection challenge was held at CVPR 2019.
Submission opened May 6 and closed June 12, 2019.
Results and winners were announced at the Workshop on Autonomous Driving ([WAD](https://sites.google.com/view/wad2019)) at [CVPR 2019](http://cvpr2019.thecvf.com/).
For more information see the [leaderboard](https://www.nuscenes.org/object-detection).
Note that the [evaluation server](https://eval.ai/web/challenges/challenge-page/356/overview) can still be used to benchmark your results.

A summary of the results can be seen below.
For details, please refer to the [detection leaderboard](https://www.nuscenes.org/object-detection).

| Rank | Team name | NDS |
|--- |--- |--- |
| 1 | MEGVII G3D3 | 63.3% |
| 2 | Tolist | 54.5% |
| 3 | SARPNET AT3D | 48.4% |
| 4 | MAIR | 38.4% |
| 5 | VIPL | 35.3% |

## Submission rules
### Detection-specific rules
* The maximum time window of past sensor data and ego poses that may be used at inference time is approximately 0.5s (at most 6 *past* camera images, 6 *past* radar sweeps and 10 *past* lidar sweeps). At training time there are no restrictions.
@@ -71,8 +94,9 @@ Note that the [evaluation server](https://eval.ai/web/challenges/challenge-page/
* Users must limit the number of submitted boxes per sample to 500.
* Every submission provides method information. We encourage publishing code, but do not make it a requirement.
* Top leaderboard entries and their papers will be manually reviewed.
* Each user or team can have at most one account *per year* on the evaluation server. Users that create multiple accounts to circumvent the rules will be excluded from the competition.
* Each user or team can submit at most three results *per year*. These results must come from different models, rather than submitting results from the same model at different training epochs or with slightly different parameters.
* Faulty submissions that return an error on Eval AI do not count towards the submission limit.
* Any attempt to circumvent these rules will result in a permanent ban of the team or company from all nuScenes challenges.
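
The 0.5 s history rule above can be illustrated with a small helper that keeps only the past sweeps a submission may use. A hedged sketch, assuming sweep timestamps in microseconds as in nuScenes `sample_data` records (function name and per-sensor caps passed in by the caller are illustrative):

```python
MAX_HISTORY_S = 0.5  # maximum look-back allowed at inference time

def allowed_sweeps(sweep_timestamps_us, current_timestamp_us, max_count):
    """Return timestamps of past sweeps a submission may use: strictly in
    the past, within MAX_HISTORY_S of the current frame, and capped at
    max_count (e.g. 10 for lidar, 6 for camera or radar), newest first."""
    window_us = int(MAX_HISTORY_S * 1e6)  # nuScenes timestamps are microseconds
    past = [t for t in sweep_timestamps_us
            if 0 < current_timestamp_us - t <= window_us]
    past.sort(reverse=True)  # most recent first
    return past[:max_count]
```

In practice one would walk the `prev` pointers of a `sample_data` record to collect these timestamps; the filter logic is the same.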

## Results format
7 changes: 4 additions & 3 deletions python-sdk/nuscenes/eval/lidarseg/README.md
@@ -41,7 +41,7 @@ Click [here](https://eval.ai/web/challenges/challenge-page/720/overview) for the

### 5th AI Driving Olympics, NeurIPS 2020
The first nuScenes lidar segmentation challenge will be held at [NeurIPS 2020](https://nips.cc/Conferences/2020/).
Submissions will be accepted from November 1 to December 8, 2020.
Results and winners will be announced at the [5th AI Driving Olympics](https://driving-olympics.ai/) at NeurIPS 2020.
For more information see the [leaderboard](https://www.nuscenes.org/lidar-segmentation).
Note that the [evaluation server](https://eval.ai/web/challenges/challenge-page/720/overview) can still be used to benchmark your results.
@@ -56,8 +56,9 @@ Note that the [evaluation server](https://eval.ai/web/challenges/challenge-page/
* Users make predictions on the test set and submit the results to our evaluation server, which returns the metrics listed below.
* Every submission provides method information. We encourage publishing code, but do not make it a requirement.
* Top leaderboard entries and their papers will be manually reviewed.
* Each user or team can have at most one account *per year* on the evaluation server. Users that create multiple accounts to circumvent the rules will be excluded from the competition.
* Each user or team can submit at most three results *per year*. These results must come from different models, rather than submitting results from the same model at different training epochs or with slightly different parameters.
* Faulty submissions that return an error on Eval AI do not count towards the submission limit.
* Any attempt to circumvent these rules will result in a permanent ban of the team or company from all nuScenes challenges.
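
The lidar segmentation leaderboard ranks entries by mean intersection-over-union (mIoU) over the semantic classes. A minimal sketch of that metric, assuming integer label arrays and an 'ignore' class that is skipped (names and the ignore-class convention are illustrative, not the devkit API):

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_class=0):
    """Mean intersection-over-union across classes, skipping the ignore
    class and any class absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        if c == ignore_class:
            continue
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:  # classes absent from both pred and gt are skipped
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```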

## Results format
14 changes: 12 additions & 2 deletions python-sdk/nuscenes/eval/prediction/README.md
@@ -45,6 +45,15 @@ Note that the evaluation server can still be used to benchmark your results afte
*Update:* Due to the COVID-19 situation, participants are **not** required to attend in person
to be eligible for the prizes.

A summary of the results can be seen below.
For details, please refer to the [prediction leaderboard](https://www.nuscenes.org/prediction).

| Rank | Team name | minADE_5 |
|--- |--- |--- |
| 1 | cxx | 1.630 |
| 2 | MHA-JAM | 1.813 |
| 3 | Trajectron++ | 1.877 |
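
The `minADE_5` column is the minimum, over the 5 most likely predicted modes, of the average displacement error: the mean pointwise L2 distance between a predicted trajectory and the ground truth. A minimal sketch, assuming trajectories are given as `(k, T, 2)` arrays of (x, y) waypoints (function name illustrative):

```python
import numpy as np

def min_ade(modes, gt):
    """minADE_k: the smallest per-mode average L2 error to the ground truth.

    modes: array of shape (k, T, 2) - k proposed (x, y) trajectories.
    gt:    array of shape (T, 2)    - ground-truth trajectory.
    """
    dists = np.linalg.norm(modes - gt[None], axis=-1)  # (k, T) pointwise errors
    return float(dists.mean(axis=-1).min())            # best mode's average error
```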

## Submission rules
### Prediction-specific rules
* The user can submit up to 25 proposed future trajectories, called `modes`, for each agent along with a probability the agent follows that proposal. Our metrics (explained below) will measure how well this proposed set of trajectories matches the ground truth.
@@ -57,8 +66,9 @@ to be eligible for the prizes.
from the training set called the `train_val` set.
* We release sensor data for train, val and test set.
* Top leaderboard entries and their papers will be manually reviewed to ensure no cheating was done.
* Each user or team can have at most one account *per year* on the evaluation server. Users that create multiple accounts to circumvent the rules will be excluded from the competition.
* Each user or team can submit at most three results *per year*. These results must come from different models, rather than submitting results from the same model at different training epochs or with slightly different parameters.
* Faulty submissions that return an error on Eval AI do not count towards the submission limit.
* Any attempt to make more submissions than allowed will result in a permanent ban of the team or company from all nuScenes challenges.

## Results format
15 changes: 13 additions & 2 deletions python-sdk/nuscenes/eval/tracking/README.md
@@ -66,6 +66,16 @@ Submission will open October 1 and close December 9.
The leaderboard will remain private until the end of the challenge.
Results and winners will be announced at the [AI Driving Olympics](http://www.driving-olympics.ai/) Workshop (AIDO) at NeurIPS 2019.

A summary of the results can be seen below.
For details, please refer to the [tracking leaderboard](https://www.nuscenes.org/tracking).

| Rank | Team name | AMOTA |
|--- |--- |--- |
| 1 | StanfordIPRL-TRI | 55.0% |
| 2 | VV_team | 37.1% |
| 3 | CenterTrack-Open | 10.8% |
| 4 | CenterTrack-Vision | 4.6% |
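
AMOTA in the table is the primary tracking metric: MOTA averaged over a set of recall thresholds, in its recall-normalized form MOTAR. A simplified sketch of that averaging, assuming error counts have already been accumulated at each recall level (the official devkit defines the exact threshold set and edge-case handling; names are illustrative):

```python
def amota(per_recall_errors, num_gt):
    """Average MOTA over recall thresholds, using recall-normalized MOTAR.

    per_recall_errors: dict mapping recall r (0 < r <= 1) to a tuple
                       (id_switches, false_positives, false_negatives)
                       accumulated at the confidence threshold for recall r.
    num_gt:            total number of ground-truth objects P.
    """
    motars = []
    for r, (ids, fp, fn) in per_recall_errors.items():
        # MOTAR discounts the (1 - r) * P misses forced by operating at recall r,
        # then normalizes by the r * P objects that are actually recoverable.
        motar = 1.0 - (ids + fp + fn - (1.0 - r) * num_gt) / (r * num_gt)
        motars.append(max(0.0, motar))  # clamp at zero, as on the leaderboard
    return sum(motars) / len(motars)
```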

## Submission rules
### Tracking-specific rules
* We perform 3D Multi Object Tracking (MOT) as in \[2\], rather than 2D MOT as in KITTI \[4\].
@@ -82,8 +92,9 @@ Results and winners will be announced at the [AI Driving Olympics](http://www.dr
* Users must limit the number of submitted boxes per sample to 500.
* Every submission provides method information. We encourage publishing code, but do not make it a requirement.
* Top leaderboard entries and their papers will be manually reviewed.
* Each user or team can have at most one account *per year* on the evaluation server. Users that create multiple accounts to circumvent the rules will be excluded from the competition.
* Each user or team can submit at most three results *per year*. These results must come from different models, rather than submitting results from the same model at different training epochs or with slightly different parameters.
* Faulty submissions that return an error on Eval AI do not count towards the submission limit.
* Any attempt to circumvent these rules will result in a permanent ban of the team or company from all nuScenes challenges.

## Results format
