Description
Following the instructions in issues #24 and #19, I was able to successfully test the volumetric model on the CMU Panoptic Dataset using the provided pretrained Human36M weights (more specifics here), with a snapshot of some of the results below:


Issues
However, despite following all 4 pointers in #24, I still have problems with some of the keypoint detections (especially the lower-body predictions, which are completely off).

Is it possible that the pretrained (H36M) model is unable to handle cases where the lower body is truncated, and thus produces the wrong predictions above?
Notes/Documentation
For those who would like to recreate the results and evaluate on the CMU dataset, note that many changes need to be made. I list the important ones below; check my forked repository for the rest.
- You will need to create your own custom `CMUPanopticDataset` class, similar to the `Human36MMultiviewDataset` class in `mvn/datasets/human36m.py`. You will also need the ground truth BBOXes from the link in issue #19 (Creating new "ground truth" for several datasets), and generate your own labels file. If you are lazy, follow my pre-processing instructions here, but note that there may be missing documentation here and there.
- As noted in issue #24 (testing on the CMU Panoptic dataset), units are a big issue: CMU keypoints are in mm while Human36M are in cm. Since the model was trained on Human36M, the predicted keypoints and the ground truth keypoints need to be "synced" by appropriate scaling factors.
- UPDATE: If, like me, you used the volumetric model without first running the algebraic model, you need to set `use_gt_pelvis` to true in the yaml config file.
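For reference, enabling the flag from the UPDATE above looks roughly like this in the experiment yaml; only the `use_gt_pelvis` name comes from this issue, and the `model:` nesting is an assumption about the config layout:

```yaml
# volumetric-model config excerpt (nesting is illustrative)
model:
  use_gt_pelvis: true
```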
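To make the unit issue concrete, here is a minimal sketch of the mm↔cm sync described above. The helper names and the `(N, 3)` array shape are my own illustration, not from the repository; the only substantive content is the factor of 10 implied by the mm-vs-cm mismatch stated in this issue:

```python
import numpy as np

MM_PER_CM = 10.0  # CMU Panoptic keypoints: mm; Human36M (per this issue): cm

def cmu_mm_to_h36m_cm(keypoints_mm):
    """Scale CMU Panoptic keypoints (mm) into Human36M units (cm).

    Hypothetical helper; assumes keypoints are an (N, 3) array in mm.
    """
    return np.asarray(keypoints_mm, dtype=np.float64) / MM_PER_CM

def h36m_cm_to_cmu_mm(keypoints_cm):
    """Scale model-space keypoints (cm) back into CMU Panoptic units (mm)."""
    return np.asarray(keypoints_cm, dtype=np.float64) * MM_PER_CM
```

Whichever direction you scale, make sure predictions and ground truth end up in the same unit before computing errors, otherwise the metrics are off by a factor of 10.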
For those who are interested, I have updated the documentation in my repository at https://github.com/Samleo8/learnable-triangulation-pytorch.