Regarding the evaluation of your pre-trained model #8

Open
@sandeep-ipk

I have run into a problem evaluating the pre-trained ShAPO model you provide in this repo here.
I could not find an evaluation script in the ShAPO repository, but I found a similar issue in your CenterSnap repo here. The author of that issue describes problems with obtaining the predicted class labels and sizes, and in one of your replies you provide a helper function and ask them to use the mask_rcnn results from the object-deformnet repository.
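
For context, this is roughly how I assemble each per-image result dict before evaluation. This is only a sketch of my own glue code: the key names follow the NOCS-style results format that object-deformnet's evaluation reads, as far as I understand it, and dummy arrays stand in for the real detections and ShAPO outputs:

```python
import numpy as np

# Sketch (my own glue code, not from your repo): one NOCS-style result
# dict per image, combining the pre-computed mask_rcnn detections from
# object-deformnet with ShAPO's predicted poses and scales.
num_gt, num_pred = 3, 3
result = {
    # ground truth, loaded from the NOCS annotations
    'gt_class_ids': np.ones(num_gt, dtype=np.int32),
    'gt_RTs': np.tile(np.eye(4), (num_gt, 1, 1)),
    'gt_scales': np.ones((num_gt, 3), dtype=np.float32),
    'gt_handle_visibility': np.ones(num_gt, dtype=np.int32),
    # predictions: class ids / scores / boxes from the mask_rcnn
    # results, rotations/translations and scales (f_size) from ShAPO
    'pred_class_ids': np.ones(num_pred, dtype=np.int32),
    'pred_scores': np.ones(num_pred, dtype=np.float32),
    'pred_bboxes': np.zeros((num_pred, 4), dtype=np.int32),
    'pred_RTs': np.tile(np.eye(4), (num_pred, 1, 1)),
    'pred_scales': np.ones((num_pred, 3), dtype=np.float32),  # = f_size
}
```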

I have done everything you asked for in that issue. However, instead of training a model of my own from scratch, I used your pre-trained ShAPO model (without post-optimization) for evaluation, and I cannot reproduce the numbers you report in the ShAPO paper (assuming your pre-trained model performs as well as CenterSnap's numbers). Therefore, I have the following questions:

  1. Is the pre-trained ShAPO checkpoint you provide an intermediate one rather than the final one, which would explain why I cannot reproduce the numbers (without post-optimization)? Also, should your pre-trained ShAPO model without post-optimization give numbers similar to CenterSnap's?

  2. How can one determine the f_size in the result['pred_scales'] = f_size statement you wrote in that issue? I compute f_size from the point cloud decoded from the predicted shape latents, using this line of code from object-deformnet (see the sketch after this list). As I understand it, this f_size directly affects the 3D IoU numbers you report in the ShAPO paper.

  3. To resolve this confusion, would it be possible for you to share the evaluation script you used to generate the numbers with the compute_mAP function, as you mentioned in that GitHub issue? (My current invocation is sketched below.)
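
For question 2, here is the computation I currently use. It is a minimal sketch assuming the point cloud decoded from the shape latent is centred at the origin in the canonical object frame; the function name is mine, and the size formula mirrors the line I found in object-deformnet:

```python
import numpy as np

def compute_f_size(pc: np.ndarray) -> np.ndarray:
    """Tight bounding-box extents of a canonical-frame point cloud.

    pc: (N, 3) point cloud decoded from the predicted shape latent,
        assumed centred at the origin in the normalized object frame.
    Returns the per-axis size (x, y, z) as 2 * max absolute coordinate.
    """
    return 2.0 * np.amax(np.abs(pc), axis=0)

# Toy usage with a fake decoded point cloud:
pc = np.random.uniform(-0.5, 0.5, size=(2048, 3))
f_size = compute_f_size(pc)  # shape (3,)
```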
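
And for question 3, this is how I currently invoke compute_mAP, modelled on object-deformnet's evaluate.py as I read it. The result paths are placeholders, and the threshold lists are my assumption of the settings behind the paper's tables, so please correct me if your actual call differs:

```python
import glob
import _pickle as cPickle
from lib.utils import compute_mAP  # from object-deformnet

# Load the per-image result dicts saved during inference
# (placeholder paths; mine are written out by my own inference script)
pred_results = []
for path in sorted(glob.glob('results/real_test/results_*.pkl')):
    with open(path, 'rb') as f:
        pred_results.append(cPickle.load(f))

# Thresholds as in object-deformnet's evaluate.py (my assumption is
# that the ShAPO paper numbers were produced with the same settings)
degree_thres_list = list(range(0, 61, 1))
shift_thres_list = [i / 2 for i in range(21)]
iou_thres_list = [i / 100 for i in range(101)]

iou_aps, pose_aps, iou_acc, pose_acc = compute_mAP(
    pred_results, 'results/real_test',
    degree_thres_list, shift_thres_list, iou_thres_list,
    iou_pose_thres=0.1, use_matches_for_pose=True)
```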

Thank you,
Sandeep
