feat: Support evaluation with non-integer image IDs
Previously, the evaluator could only handle image IDs (and therefore image names) that consisted of digits only. Now it can handle arbitrary image names.
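The gist of the change can be sketched as follows (a minimal illustration with a hypothetical helper, not the evaluator's actual code): instead of parsing image names into integer IDs, the IDs are kept as strings, so any file name works as a dictionary key.

```python
# Minimal sketch (hypothetical helper name `group_boxes_by_image`):
# keep image IDs as strings so non-numeric image names work as dict keys.

def group_boxes_by_image(predictions):
    """Group predicted boxes under their image ID without assuming digits-only IDs."""
    grouped = {}
    for image_id, box in predictions:
        # Previously something like `int(image_id)` would fail on 'img_0001a'.
        grouped.setdefault(str(image_id), []).append(box)
    return grouped

preds = [('img_0001a', (0.9, 10, 20, 50, 60)),
         ('img_0001a', (0.4, 12, 22, 48, 58)),
         ('7',         (0.8,  0,  0, 30, 30))]
grouped = group_boxes_by_image(preds)
```

With integer-only IDs, the first two entries would have raised a `ValueError` on conversion; here they simply share the key `'img_0001a'`.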
     The official Matlab evaluation algorithm uses a stable sorting algorithm, so this algorithm is only guaranteed
     to behave identically if you choose 'mergesort' as the sorting algorithm, but it will almost always behave identically
     even if you choose 'quicksort' (but no guarantees).
-    pred_format (dict, optional): In what format to expect the predictions. This argument usually doesn't need to be touched,
-        because the default setting matches what `predict_on_dataset()` outputs.
     verbose (bool, optional): If `True`, will print out the progress during runtime.
     ret (bool, optional): If `True`, returns the true and false positives.
@@ -587,13 +580,6 @@ def match_predictions(self,
         if self.prediction_results is None:
             raise ValueError("There are no prediction results. You must run `predict_on_dataset()` before calling this method.")

-        image_id_pred = pred_format['image_id']
-        conf_pred = pred_format['conf']
-        xmin_pred = pred_format['xmin']
-        ymin_pred = pred_format['ymin']
-        xmax_pred = pred_format['xmax']
-        ymax_pred = pred_format['ymax']
-
         class_id_gt = self.gt_format['class_id']
         xmin_gt = self.gt_format['xmin']
         ymin_gt = self.gt_format['ymin']
@@ -605,7 +591,7 @@ def match_predictions(self,
         ground_truth = {}
         eval_neutral_available = not (self.data_generator.eval_neutral is None) # Whether or not we have annotations to decide whether ground truth boxes should be neutral or not.