nuScenes challenge rules, eval server link and more robust error handling (#141)

* Improved error messages

* Improved error messages

* Compute ap over all classes, more comments

* Add assertion to check for old format

* Added more clarifications to the challenges

* Added link to EvalAI
holger-motional authored May 7, 2019
1 parent 08e52c0 commit 6761b7f
Showing 6 changed files with 59 additions and 24 deletions.
1 change: 0 additions & 1 deletion faqs.md
@@ -1,5 +1,4 @@
# Frequently asked questions

On this page we try to answer questions frequently asked by our users.

- How can I get in contact?
15 changes: 14 additions & 1 deletion python-sdk/nuscenes/eval/detection/README.md
@@ -17,10 +17,19 @@ The goal of this task is to place a 3D bounding box around 10 different object c
as well as estimating a set of attributes and the current velocity vector.

## Challenges
To allow users to benchmark the performance of their method against the community, we host a single [leaderboard](#leaderboard) year-round.
Additionally, we organize a number of challenges at leading computer vision conference workshops.
Users that submit their results during the challenge period are eligible for awards.
Any user that cannot attend the workshop (directly or via a representative) will be excluded from the challenge, but will still be listed on the leaderboard.

### Workshop on Autonomous Driving, CVPR 2019
The first nuScenes detection challenge will be held at CVPR 2019.
**Submission opened May 6 and closes June 12**.
To participate in the challenge, please create an account at [EvalAI](http://evalai.cloudcv.org/web/challenges/challenge-page/356).
Then upload your results file in JSON format and provide the metadata where possible: method name, description, project URL and publication URL.
The leaderboard will remain private until the end of the challenge.
Results and winners will be announced at the Workshop on Autonomous Driving ([WAD](https://sites.google.com/view/wad2019)) at [CVPR 2019](http://cvpr2019.thecvf.com/).
Please note that this workshop is not related to the similarly named [Workshop on Autonomous Driving Beyond Single-Frame Perception](http://wad.ai).

## Submission rules
* We release annotations for the train and val set, but not for the test set.
@@ -31,9 +40,13 @@ Results and winners will be announced at the Workshop on Autonomous Driving ([WA
* Users must limit the number of submitted boxes per sample to 500.
* Every submission provides method information. We encourage publishing code, but do not make it a requirement.
* Top leaderboard entries and their papers will be manually reviewed.
* Each user or team can have at most one account on the evaluation server.
* Each user or team can submit at most 3 results. These results must come from different models, rather than submitting results from the same model at different training epochs or with slightly different parameters.
* Any attempt to circumvent these rules will result in a permanent ban of the team or company from all nuScenes challenges.

## Results format
We define a standardized detection result format that serves as an input to the evaluation code.
Results are evaluated for each 2Hz keyframe, also known as `sample`.
The detection results for a particular evaluation set (train/val/test) are stored in a single JSON file.
For the train and val sets the evaluation can be performed by the user on their local machine.
For the test set the user needs to zip the single JSON result file and submit it to the official evaluation server.
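The format described above can be sketched end-to-end. A minimal, hypothetical example: the box fields mirror the `EvalBox` constructor in `data_classes.py`, the `meta` keys are illustrative, and all values are dummies.

```python
import json
import zipfile

# Minimal sketch of assembling a detection submission (all values are dummies).
submission = {
    'meta': {'use_camera': False, 'use_lidar': True, 'use_radar': False,
             'use_map': False, 'use_external': False},
    'results': {
        'dummy_sample_token': [{
            'sample_token': 'dummy_sample_token',
            'translation': [974.2, 1714.6, -0.3],   # x, y, z in meters.
            'size': [1.8, 4.3, 1.6],                # width, length, height.
            'rotation': [0.67, 0.0, 0.0, 0.74],     # Quaternion (4 elements).
            'velocity': [0.0, 0.0],                 # vx, vy.
            'detection_name': 'car',
            'detection_score': 0.85,
            'attribute_name': 'vehicle.parked',
        }],
    },
}

# One JSON file per evaluation set; the test server expects the single file zipped.
with open('submission.json', 'w') as f:
    json.dump(submission, f)
with zipfile.ZipFile('submission.zip', 'w') as zf:
    zf.write('submission.json')
```

For train/val, the same file can be fed directly to the local evaluation code instead of being zipped.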
36 changes: 19 additions & 17 deletions python-sdk/nuscenes/eval/detection/data_classes.py
@@ -80,32 +80,34 @@ def __init__(self,
num_pts: int = -1): # Nbr. LIDAR or RADAR inside the box. Only for gt boxes.

# Assert data for shape and NaNs.
assert type(sample_token) == str
assert type(sample_token) == str, 'Error: sample_token must be a string!'

assert len(translation) == 3
assert not np.any(np.isnan(translation))
assert len(translation) == 3, 'Error: Translation must have 3 elements!'
assert not np.any(np.isnan(translation)), 'Error: Translation may not be NaN!'

assert len(size) == 3
assert not np.any(np.isnan(size))
assert len(size) == 3, 'Error: Size must have 3 elements!'
assert not np.any(np.isnan(size)), 'Error: Size may not be NaN!'

assert len(rotation) == 4
assert not np.any(np.isnan(rotation))
assert len(rotation) == 4, 'Error: Rotation must have 4 elements!'
assert not np.any(np.isnan(rotation)), 'Error: Rotation may not be NaN!'

assert len(velocity) == 2 # Velocity can be NaN from our database for certain annotations.
# Velocity can be NaN from our database for certain annotations.
assert len(velocity) == 2, 'Error: Velocity must have 2 elements!'

assert detection_name in DETECTION_NAMES
assert detection_name is not None, 'Error: detection_name cannot be empty!'
assert detection_name in DETECTION_NAMES, 'Error: Unknown detection_name %s' % detection_name

assert attribute_name in ATTRIBUTE_NAMES or attribute_name == ''
assert attribute_name in ATTRIBUTE_NAMES or attribute_name == '', \
'Error: Unknown attribute_name %s' % attribute_name

assert type(ego_dist) == float
assert not np.any(np.isnan(ego_dist))
assert type(ego_dist) == float, 'Error: ego_dist must be a float!'
assert not np.any(np.isnan(ego_dist)), 'Error: ego_dist may not be NaN!'

assert type(detection_score) == float
assert not np.any(np.isnan(detection_score))
assert type(detection_score) == float, 'Error: detection_score must be a float!'
assert not np.any(np.isnan(detection_score)), 'Error: detection_score may not be NaN!'

assert type(num_pts) == int

assert not np.any(np.isnan(num_pts))
assert type(num_pts) == int, 'Error: num_pts must be int!'
assert not np.any(np.isnan(num_pts)), 'Error: num_pts may not be NaN!'

# Assign.
self.sample_token = sample_token
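The pattern above, where each assert carries a human-readable message, can be exercised standalone. A minimal sketch with a hypothetical `validate_translation` helper, using `math.isnan` in place of `np.isnan` to keep it dependency-free:

```python
import math

def validate_translation(translation):
    # Same assertion style as the diff: fail fast with a readable message.
    assert len(translation) == 3, 'Error: Translation must have 3 elements!'
    assert not any(math.isnan(x) for x in translation), \
        'Error: Translation may not be NaN!'

# A malformed box now yields an actionable message instead of a bare AssertionError.
error_message = ''
try:
    validate_translation([1.0, float('nan'), 3.0])
except AssertionError as e:
    error_message = str(e)
print(error_message)  # Error: Translation may not be NaN!
```
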
3 changes: 3 additions & 0 deletions python-sdk/nuscenes/eval/detection/evaluate.py
@@ -148,6 +148,9 @@ def render(self, metrics: DetectionMetrics, md_list: MetricDataList) -> None:
:param md_list: MetricDataList instance.
"""

if self.verbose:
print('Rendering PR and TP curves')

def savepath(name):
return os.path.join(self.plot_dir, name + '.pdf')

7 changes: 7 additions & 0 deletions python-sdk/nuscenes/eval/detection/loaders.py
@@ -19,13 +19,20 @@

def load_prediction(result_path: str, max_boxes_per_sample: int, verbose: bool = False) -> Tuple[EvalBoxes, Dict]:
""" Loads object predictions from file. """

# Load from file and check that the format is correct.
with open(result_path) as f:
data = json.load(f)
assert 'results' in data, 'Error: No field `results` in result file. Please note that the result format changed. ' \
'See https://www.nuscenes.org/object-detection for more information.'

# Deserialize results and get meta data.
all_results = EvalBoxes.deserialize(data['results'])
meta = data['meta']
if verbose:
print("Loaded results from {}. Found detections for {} samples."
.format(result_path, len(all_results.sample_tokens)))

# Check that each sample has no more than x predicted boxes.
for sample_token in all_results.sample_tokens:
assert len(all_results.boxes[sample_token]) <= max_boxes_per_sample, \
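The format check above can be tried in isolation. A minimal sketch, where `load_results` is a hypothetical, simplified stand-in for `load_prediction` that skips the `EvalBoxes` deserialization:

```python
import json

def load_results(result_path, max_boxes_per_sample=500):
    # Load from file and check that the format is correct.
    with open(result_path) as f:
        data = json.load(f)
    assert 'results' in data, 'Error: No field `results` in result file. ' \
        'Please note that the result format changed.'
    # Check that each sample has no more than max_boxes_per_sample boxes.
    for sample_token, boxes in data['results'].items():
        assert len(boxes) <= max_boxes_per_sample, \
            'Error: Only <= %d boxes per sample allowed!' % max_boxes_per_sample
    return data['results'], data.get('meta', {})

# An old-format file (boxes at the top level, no `results` key) now fails loudly.
with open('old_format.json', 'w') as f:
    json.dump({'some_sample_token': []}, f)
error_message = ''
try:
    load_results('old_format.json')
except AssertionError as e:
    error_message = str(e)
```
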
21 changes: 16 additions & 5 deletions python-sdk/nuscenes/eval/detection/render.py
@@ -107,7 +107,19 @@ def setup_axis(xlabel: str = None,
title: str = None,
min_precision: float = None,
min_recall: float = None,
ax=None):
ax = None):
"""
Helper method that sets up the axis for a plot.
:param xlabel: x label text.
:param ylabel: y label text.
:param xlim: Upper limit for x axis.
:param ylim: Upper limit for y axis.
:param title: Axis title.
:param min_precision: Visualize minimum precision as horizontal line.
:param min_recall: Visualize minimum recall as vertical line.
:param ax: (optional) an existing axis to be modified.
:return: The axes object.
"""

if ax is None:
ax = plt.subplot()
@@ -272,7 +284,6 @@ def detailed_results_table_tex(metrics_path: str, output_path: str) -> None:
Renders a detailed results table in tex.
:param metrics_path: path to a serialized DetectionMetrics file.
:param output_path: path to the output file.
:return:
"""

with open(metrics_path, 'r') as f:
@@ -287,7 +298,7 @@ def detailed_results_table_tex(metrics_path: str, output_path: str) -> None:
'\\textbf{AAE} \\\\ \\hline ' \
'\\hline\n'
for name in DETECTION_NAMES:
ap = metrics['label_aps'][name]['2.0'] * 100
ap = np.mean(list(metrics['label_aps'][name].values()))
ate = metrics['label_tp_errors'][name]['trans_err']
ase = metrics['label_tp_errors'][name]['scale_err']
aoe = metrics['label_tp_errors'][name]['orient_err']
@@ -304,7 +315,7 @@ def detailed_results_table_tex(metrics_path: str, output_path: str) -> None:
tex += '{} & {:.1f} & {:.2f} & {:.2f} & {:.2f} & {:.2f} & {:.2f} \\\\ ' \
'\\hline\n'.format(tex_name, ap, ate, ase, aoe, ave, aae)

map_ = metrics['mean_ap'] * 100
map_ = metrics['mean_ap']
mate = metrics['tp_errors']['trans_err']
mase = metrics['tp_errors']['scale_err']
maoe = metrics['tp_errors']['orient_err']
@@ -317,7 +328,7 @@ def detailed_results_table_tex(metrics_path: str, output_path: str) -> None:

# All one line
tex += '\\caption{Detailed detection performance. '
tex += 'AP: average precision (\%), '
tex += 'AP: average precision, '
tex += 'ATE: average translation error (${}$), '.format(TP_METRICS_UNITS['trans_err'])
tex += 'ASE: average scale error (${}$), '.format(TP_METRICS_UNITS['scale_err'])
tex += 'AOE: average orientation error (${}$), '.format(TP_METRICS_UNITS['orient_err'])
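The table change above reports AP averaged over all matching thresholds rather than the 2.0 m threshold alone, and drops the percent scaling. A minimal sketch with dummy numbers; note that in Python 3, `np.mean` needs `list(...)` around a dict view, or a pure-Python mean as used here:

```python
# Dummy per-class APs at each matching threshold in meters, as in `label_aps`.
label_aps = {'0.5': 0.384, '1.0': 0.641, '2.0': 0.776, '4.0': 0.831}

# Before: AP at a single threshold, scaled to percent.
ap_single = label_aps['2.0'] * 100

# After: plain mean over all thresholds, reported as a fraction.
ap_all = sum(label_aps.values()) / len(label_aps)
```
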
