LCFCN - ECCV 2018 (Try in a Colab)
Make a segmentation model learn to count and localize objects by adding a single line of code. Instead of applying a cross-entropy loss on dense per-pixel labels, apply the LCFCN loss on point-level annotations (one labeled pixel per object).
pip install git+https://github.com/ElementAI/LCFCN
from lcfcn import lcfcn_loss
# compute a CxHxW logits mask using any segmentation model
logits = seg_model.forward(images)
# compute the loss given 'points' as an HxW mask (1 pixel label per object)
loss = lcfcn_loss.compute_loss(points=points, probs=logits.sigmoid())
loss.backward()
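For context, here is a minimal end-to-end training-step sketch. The fcn_resnet50 backbone and the dummy tensors are illustrative stand-ins (the paper itself uses an FCN8 model), and the tensor shapes follow the snippet above:

import torch
import torchvision
from lcfcn import lcfcn_loss

# any segmentation model that emits a per-pixel logits map works;
# fcn_resnet50 here is a stand-in, not the paper's architecture
seg_model = torchvision.models.segmentation.fcn_resnet50(num_classes=1)
opt = torch.optim.Adam(seg_model.parameters(), lr=1e-5)

# dummy inputs: one RGB image and an HxW point mask with one labeled
# pixel per object
images = torch.randn(1, 3, 256, 256)
points = torch.zeros(256, 256, dtype=torch.long)
points[60, 80] = 1

logits = seg_model(images)['out']  # 1x1xHxW logits
loss = lcfcn_loss.compute_loss(points=points, probs=logits.sigmoid())

opt.zero_grad()
loss.backward()
opt.step()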
pip install -r requirements.txt
This command installs pydicom and the Haven library, which helps manage the experiments.
- Shanghai Dataset
- Trancos Dataset, which can be downloaded with:

wget http://agamenon.tsc.uah.es/Personales/rlopez/data/trancos/TRANCOS_v3.tar.gz
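After the download finishes, the archive can be extracted into the data directory; the destination below is a placeholder:

tar -xzf TRANCOS_v3.tar.gz -C <datadir>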
python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
- <datadir> is where the dataset is located.
- <savedir_base> is where the experiment weights and results will be saved.
- -e trancos specifies the trancos training hyper-parameters defined in exp_configs.py.
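For instance, with hypothetical local paths:

python trainval.py -e trancos -d /data/trancos -sb /results/lcfcn -r 1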
> jupyter nbextension enable --py widgetsnbextension --sys-prefix
> jupyter notebook
from haven import haven_jupyter as hj
from haven import haven_results as hr
try:
    %load_ext google.colab.data_table
except:
    pass
# path to where the experiments were saved
savedir_base = '<savedir_base>'
# filter exps
filterby_list = None
# get experiments
rm = hr.ResultManager(savedir_base=savedir_base,
                      filterby_list=filterby_list,
                      verbose=0)
# dashboard variables
title_list = ['dataset', 'model']
y_metrics = ['val_mae']
# launch dashboard
hj.get_dashboard(rm, vars(), wide_display=True)
This script outputs the following dashboard:
If you find the code useful for your research, please cite:
@inproceedings{laradji2018blobs,
title={Where are the blobs: Counting by localization with point supervision},
author={Laradji, Issam H and Rostamzadeh, Negar and Pinheiro, Pedro O and Vazquez, David and Schmidt, Mark},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
pages={547--562},
year={2018}
}