Can't achieve the performance reported in the paper #30
Description
Hi,
This is interesting work, but we cannot reproduce the performance reported in the PRM paper (26.8 mAP50 with MCG proposals).
We can only get 11.5 mAP50 with MCG proposals downloaded from https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/
and 21.5 mAP50 with COB proposals downloaded from http://www.vision.ee.ethz.ch/~cvlsegmentation/cob/code.html.
We train the classification network with the default PRM parameters (https://github.com/ZhouYanzhao/PRM/blob/pytorch/demo/config.yml), only changing train_splits from trainval to trainaug. However, we notice that both the quality of the peaks and the resulting instance masks are worse than those reported in the paper.
So we wonder whether you used other hyper-parameter settings in your experiments.
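For reference, the split change is the only edit we made to the demo config. The snippet below is just an illustrative sketch (the actual config.yml may nest its keys differently); the point is simply that train_splits is switched while everything else stays at its default:

```yaml
# Illustrative sketch only -- not a verbatim copy of demo/config.yml.
# The single change we made: train on the augmented split instead of trainval.
train_splits: trainaug   # default: trainval
# All other hyper-parameters (optimizer, learning rate, epochs, backbone, ...)
# are left at the values shipped in the repository.
```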
Besides, according to our observations, the MCG proposals from https://data.vision.ee.ethz.ch/jpont/mcg/MCG-Pascal-Segmentation_trainvaltest_2012-proposals.tgz are much worse than those shown in the paper and the supplementary material. Do we need to retrain MCG on the PASCAL VOC training set to generate better proposals?
Could you point out the differences between our experiments and yours that may result in this gap, or give us some advice to boost the performance?
Thanks a lot.