
mAP (mean Average Precision)


This code will evaluate the performance of your neural net for object recognition.

In practice, a higher mAP value indicates better performance of your neural net, given your ground-truth and set of classes.

Table of contents

  • Explanation
  • Prerequisites
  • Quick-start
  • Running the code
  • Create the ground-truth files
  • Create the predicted objects files
  • Authors

Explanation

The performance of your neural net will be judged using the mAP criterion defined in the PASCAL VOC 2012 competition. We simply adapted the official MATLAB code into Python (in our tests both give the same results).

First (1.), we calculate the Average Precision (AP) for each of the classes present in the ground-truth. Then (2.), we calculate the mean of all the APs, resulting in the mAP value.

1. Calculate AP for each Class

2. Calculate mAP
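
If it helps to see these two steps as code, below is a minimal sketch in Python, assuming you already have the per-class precision/recall points sorted by descending detection confidence. The function names voc_ap and mean_average_precision are illustrative; the repository's main.py also has to match detections to ground-truth boxes before it can build these precision/recall points.

    # Sketch of the two steps above (illustrative, not the repo's main.py).
    # Step 1: AP = area under the precision-recall curve, with precision made
    # monotonically decreasing (PASCAL VOC 2012 "all points" interpolation).
    def voc_ap(recall, precision):
        mrec = [0.0] + list(recall) + [1.0]
        mpre = [0.0] + list(precision) + [0.0]
        # Walk precision from right to left, keeping the running maximum.
        for i in range(len(mpre) - 2, -1, -1):
            mpre[i] = max(mpre[i], mpre[i + 1])
        # Sum the rectangles under the curve wherever recall increases.
        return sum((mrec[i] - mrec[i - 1]) * mpre[i] for i in range(1, len(mrec)))

    # Step 2: mAP = mean of the per-class AP values.
    def mean_average_precision(ap_per_class):
        return sum(ap_per_class.values()) / len(ap_per_class)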

Prerequisites

You need to install:

  • Python (main.py is a Python script)

Optional:

  • plot the results by installing Matplotlib - Linux, macOS and Windows:
    1. python -m pip install -U pip
    2. python -m pip install -U matplotlib
  • show the animation by installing OpenCV:
    1. python -m pip install -U pip
    2. python -m pip install -U opencv-python

Quick-start

To start using mAP, clone the repo:

git clone https://github.com/Cartucho/mAP

Running the code

Step by step:

  1. Create the ground-truth files
  2. Move the ground-truth files into the folder ground-truth/
  3. Create the predicted objects files
  4. Move the prediction files into the folder predicted/
  5. Run the code: python main.py

Optional (if you want to see the animation):

  1. Insert the images into the folder images/
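
Before running main.py, a quick way to check that the inputs line up is to compare the two folders; a small sketch, assuming the folder names ground-truth/ and predicted/ used in the steps above:

    # Optional sanity check: every ground-truth file should have a matching
    # predictions file with the same name (and vice versa).
    import os

    gt_files = {f for f in os.listdir("ground-truth") if f.endswith(".txt")}
    pred_files = {f for f in os.listdir("predicted") if f.endswith(".txt")}

    if gt_files - pred_files:
        print("Ground-truth files without predictions:", sorted(gt_files - pred_files))
    if pred_files - gt_files:
        print("Prediction files without ground-truth:", sorted(pred_files - gt_files))
    if gt_files == pred_files:
        print("OK: %d matching file pairs found." % len(gt_files))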

Create the ground-truth files

  • Create a separate ground-truth text file for each image.
  • Use matching names (e.g. image: "image_1.jpg", ground-truth: "image_1.txt"; "image_2.jpg", "image_2.txt"...).
  • In these files, each line should be in the following format:
    <class_name> <left> <top> <right> <bottom>
    
    where <class_name> must not contain whitespace (e.g. pottedplant, not potted plant).
  • E.g. "image_1.txt":
    tvmonitor 2 10 173 238
    book 439 157 556 241
    book 437 246 518 351
    pottedplant 272 190 316 259
    
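If your annotations are stored in another format, they can be written out as described above with a few lines of Python; a minimal sketch (the image name, box values and output path reuse the example above and are illustrative):

    # Sketch: write one ground-truth file per image in the format above.
    # Each box is (class_name, left, top, right, bottom) in pixel coordinates.
    annotations = {
        "image_1": [
            ("tvmonitor", 2, 10, 173, 238),
            ("book", 439, 157, 556, 241),
        ],
    }

    for image_name, boxes in annotations.items():
        with open("ground-truth/%s.txt" % image_name, "w") as f:
            for class_name, left, top, right, bottom in boxes:
                f.write("%s %d %d %d %d\n" % (class_name, left, top, right, bottom))
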

Create the predicted objects files

  • Create a separate predicted objects text file for each image.
  • Use matching names (e.g. image: "image_1.jpg", predicted: "image_1.txt"; "image_2.jpg", "image_2.txt"...).
  • In these files, each line should be in the following format:
    <class_name> <confidence> <left> <top> <right> <bottom>
    
    where <class_name> must not contain whitespace.
  • E.g. "image_1.txt":
    tvmonitor 0.471781 0 13 174 244
    cup 0.414941 274 226 301 265
    book 0.460851 429 219 528 247
    chair 0.292345 0 199 88 436
    book 0.269833 433 260 506 336
    
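Similarly, the detections produced by your network can be dumped in this format; a minimal sketch (the values reuse the example above and are illustrative):

    # Sketch: write one predictions file per image in the format above.
    # Each detection is (class_name, confidence, left, top, right, bottom).
    detections = {
        "image_1": [
            ("tvmonitor", 0.471781, 0, 13, 174, 244),
            ("cup", 0.414941, 274, 226, 301, 265),
        ],
    }

    for image_name, dets in detections.items():
        with open("predicted/%s.txt" % image_name, "w") as f:
            for class_name, conf, left, top, right, bottom in dets:
                f.write("%s %f %d %d %d %d\n" % (class_name, conf, left, top, right, bottom))
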

Authors:
