TF's COCO evaluation wrapper doesn't actually support include_metrics_per_category and all_metrics_per_category #4778

@netanel-s

Description

System information

  • What is the top-level directory of the model you are using: models/research/object_detection/
  • Have I written custom code: No; using the stock COCO detection evaluation script
  • OS Platform and Distribution: Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.8.0
  • Bazel version (if compiling from source): -
  • CUDA/cuDNN version: CUDA 9
  • GPU model and memory: Titan XP, 12GB
  • Exact command to reproduce: run object_detection/eval.py with the checkpoint and pipeline config of ssdlite_mobilenet_v2_coco_2018_05_09, after adding metrics_set: "coco_detection_metrics" and include_metrics_per_category: true to eval_config.

Describe the problem

The include_metrics_per_category and all_metrics_per_category flags in the eval_config section of the model's pipeline config are never read, so they are effectively always False.
As a result, the output is always the usual 12 COCO AP/AR metrics aggregated over all categories, never a per-category breakdown.
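
For reference, this is roughly how the flags are set in my pipeline config (field names as defined in object_detection/protos/eval.proto; the rest of the config is unchanged from the released ssdlite_mobilenet_v2_coco_2018_05_09 config):

```
eval_config: {
  # ... other eval settings from the released config left unchanged ...
  metrics_set: "coco_detection_metrics"
  include_metrics_per_category: true
  # all_metrics_per_category: true   # would request the full per-category table
}
```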

Source code / logs

In get_evaluators() in object_detection/evaluator.py, the evaluator is constructed as
EVAL_METRICS_CLASS_DICT[eval_metric_fn_key](categories=categories)
which should also pass include_metrics_per_category and all_metrics_per_category when those fields are set in eval_config (see the sketch below).
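
Something along these lines is what I had in mind. This is an untested sketch, and it assumes CocoDetectionEvaluator in object_detection/metrics/coco_evaluation.py accepts include_metrics_per_category and all_metrics_per_category as keyword arguments (its constructor signature suggests it does); the default-metric fallback is omitted for brevity:

```python
# Untested sketch of get_evaluators() in object_detection/evaluator.py,
# relying on the module's existing EVAL_METRICS_CLASS_DICT.
def get_evaluators(eval_config, categories):
  evaluators_list = []
  for eval_metric_fn_key in eval_config.metrics_set:
    if eval_metric_fn_key not in EVAL_METRICS_CLASS_DICT:
      raise ValueError('Metric not found: {}'.format(eval_metric_fn_key))
    kwargs = {}
    if eval_metric_fn_key == 'coco_detection_metrics':
      # Forward the per-category flags from eval_config instead of dropping them.
      kwargs['include_metrics_per_category'] = (
          eval_config.include_metrics_per_category)
      kwargs['all_metrics_per_category'] = (
          eval_config.all_metrics_per_category)
    evaluators_list.append(
        EVAL_METRICS_CLASS_DICT[eval_metric_fn_key](
            categories=categories, **kwargs))
  return evaluators_list
```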
But even after passing the flags through, I hit the 'Category stats do not exist' ValueError in ComputeMetrics() in object_detection/metrics/coco_tools.py, because the COCOEvalWrapper instance has no category_stats attribute.
I tried to work out how these stats are supposed to be computed and populated, but as far as I can tell the code never fills them in, even though ComputeMetrics() implies per-category stats are supported (see the sketch below).
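
For what it's worth, per-category AP looks recoverable from the precision array that pycocotools' COCOeval.accumulate() already builds, so maybe category_stats could be populated along these lines. This is only a sketch based on the COCOeval internals, not something coco_tools currently does, and the helper name is mine:

```python
import numpy as np

def per_category_ap(coco_eval):
  """Per-category AP from an already-accumulated COCOeval instance.

  coco_eval.eval['precision'] has shape
  [num_iou_thresholds, num_recall_points, num_categories, num_areas, num_max_dets].
  """
  precision = coco_eval.eval['precision']
  num_categories = precision.shape[2]
  ap_per_category = np.zeros(num_categories)
  for k in range(num_categories):
    # Area index 0 = 'all', last maxDets index = 100 (the defaults);
    # entries of -1 mean no ground truth for that setting and are skipped.
    p = precision[:, :, k, 0, -1]
    ap_per_category[k] = np.mean(p[p > -1]) if (p > -1).any() else float('nan')
  return ap_per_category
```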

A fix would be highly appreciated.
Thank you very much in advance.
