Description
I ran into a problem while evaluating the pre-trained ShAPO model you provide in the repo here.
I could not find an evaluation script in the ShAPO repository, but I found a similar issue in your CenterSnap repo here. The author of that issue describes problems with finding the predicted class labels and sizes, and in one of your replies you provide a helper function and ask them to use the mask_rcnn results from the object-deformnet repository.
I followed all the steps you describe in that issue, except that I used your pre-trained ShAPO model (without post-optimization) for evaluation instead of training one from scratch. However, I cannot reproduce the numbers reported in the ShAPO paper (assuming your pre-trained model performs as well as CenterSnap's numbers). I therefore have the following questions:
- Is the pre-trained ShAPO checkpoint you provide not the final one, but an intermediate one, which would explain why I cannot reproduce the numbers (without post-optimization)? Also, should your pre-trained ShAPO model without post-optimization give numbers similar to CenterSnap's?
- How should one determine the `f_size` in the `result['pred_scales'] = f_size` statement you wrote in that issue? I compute `f_size` from the point cloud decoded from the predicted shape latents, using this line of code from object-deformnet. As I understand it, this `f_size` is important for the 3D IoU numbers you report in the ShAPO paper.
- To resolve this confusion, would it be possible for you to share the evaluation script you used to generate the numbers with the `compute_mAP` function, as you mentioned in that GitHub issue?
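For context, here is a minimal sketch of how I currently compute `f_size`, assuming the tight axis-aligned bounding-box size convention used in object-deformnet's evaluation (the function name and the example point cloud below are my own; the exact line in that repo may differ):

```python
import numpy as np

def estimate_pred_size(points: np.ndarray) -> np.ndarray:
    """Tight axis-aligned bounding-box extents of a roughly origin-centred
    point cloud: 2 * max absolute coordinate along each axis.
    Assumed to match object-deformnet's size convention; may not be exact."""
    return 2.0 * np.amax(np.abs(points), axis=0)

# Hypothetical point cloud decoded from the predicted shape latent
pred_pc = np.array([[ 0.5,  0.1,  0.2],
                    [-0.5, -0.3,  0.1],
                    [ 0.2,  0.3, -0.4]])
f_size = estimate_pred_size(pred_pc)  # -> array([1.0, 0.6, 0.8])
```

If this is not the convention your evaluation assumes (e.g. if the point cloud should first be re-centred or rescaled), that could explain the gap I am seeing in the 3D IoU numbers.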
Thank you,
Sandeep