This repo implements a CBIR (content-based image retrieval) system
In this system, I implement several popular image features (a minimal sketch of a color feature is shown after this list):
- color-based
- texture-based
- shape-based
- deep methods
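To make the color-based idea concrete, here is a minimal sketch of a global RGB histogram feature using NumPy and PIL; this is illustrative only and not necessarily how the repo's color module bins or normalizes:

```python
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    """Return a normalized (bins**3)-dimensional RGB histogram for one image."""
    img = np.asarray(Image.open(path).convert('RGB'))
    # Quantize each channel into `bins` levels and count joint occurrences
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.flatten()
    return hist / hist.sum()                 # normalize so the bins sum to 1
```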
Some features are not robust enough on their own, so I turn to feature fusion
The curse of dimensionality tells us that vectors in high dimensions can lose meaningful distance properties
A CBIR system retrieves images based on feature similarity
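As a minimal sketch of this retrieval step, assuming every database image has already been mapped to a feature vector and using Euclidean distance as the similarity measure (the names `retrieve` and `db_feats` are hypothetical, not this repo's API):

```python
import numpy as np

def retrieve(query_feat, db_feats, depth=10):
    """Return the names of the `depth` database images closest to the query."""
    names = list(db_feats.keys())
    # L2 distance between the query feature and each database feature
    dists = [np.linalg.norm(query_feat - db_feats[name]) for name in names]
    order = np.argsort(dists)                # smallest distance first
    return [names[i] for i in order[:depth]]
```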
Robustness of the system is evaluated by MMAP (the mean of the class MAPs); the evaluation method is defined as follows (a small sketch of the computation appears after the list)
- image AP : average of precision at each hit
- depth=K means the system will return top-K images
- a correct image in top-K is called a hit
- AP = (hit1.precision + hit2.precision + ... + hitH.precision) / H
- class1 MAP = (class1.img1.AP + class1.img2.AP + ... + class1.imgM.AP) / M
- MMAP = (class1.MAP + class2.MAP + ... + classN.MAP) / N
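The following is a minimal sketch of these formulas (the repo's actual implementation is in evaluate.py; the function names and the `aps_by_class` layout here are hypothetical):

```python
def average_precision(ranked_labels, query_label):
    """AP = mean of the precision values taken at each hit position."""
    hits, precisions = 0, []
    for i, label in enumerate(ranked_labels, start=1):
        if label == query_label:            # a correct image in the top-K is a hit
            hits += 1
            precisions.append(hits / i)     # precision at this hit
    return sum(precisions) / len(precisions) if precisions else 0.0

def mmap(aps_by_class):
    """aps_by_class maps each class name to the list of APs of its query images."""
    class_maps = [sum(aps) / len(aps) for aps in aps_by_class.values()]  # per-class MAP
    return sum(class_maps) / len(class_maps)                             # mean of class MAPs
```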
Implementation of the evaluation can be found in evaluate.py
My database contains 25 classes with 20 images per class, 500 images in total; with depth=K, the system returns the top-K images from the database
Method | color | daisy | edge | gabor | HOG | vgg19 | resnet152 |
---|---|---|---|---|---|---|---|
Mean MAP (depth=10) | 0.614 | 0.468 | 0.301 | 0.346 | 0.450 | 0.914 | 0.944 |
Let me show some results of the system
If you are interested in the results and want to try your own images,
Please refer to USAGE.md
The details are written inside.
Po-Chih Huang / @brianhuang1019