Hi,
thanks for your great work!
I ran into some problems while reproducing your quantitative results. May I ask a few questions about the in-distribution evaluation on the ADE20K dataset?
- Which code did you use to calculate the FID? Is it this one? https://github.com/mseitzer/pytorch-fid
- At what resolution did you evaluate FID, 256x256 or 512x512? Which interpolation method did you use when resizing the ground-truth and synthesized images?
- Did you use this code to calculate the mIoU on ADE20K? https://github.com/CSAILVision/semantic-segmentation-pytorch And is this the model you used to predict the semantic labels of the synthesized images? http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet101-upernet/
- If so, did you change the config file for your mIoU evaluation? https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/8f27c9b97d2ca7c6e05333d5766d144bf7d8c31b/config/ade20k-resnet101-upernet.yaml#L6 Also, what is the resolution of your ground-truth labels?
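For context on the first two questions: this is the Fréchet distance I assume the pytorch-fid repository computes from Inception statistics. It is my own minimal NumPy sketch of the formula, written only to make sure we are talking about the same metric, not a copy of your evaluation code:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians N(mu1, sigma1), N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical noise can
    # introduce a tiny imaginary component, which we discard.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(
        diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * np.trace(covmean)
    )
```

If this matches what you ran, then the remaining variables are just the image resolution and the interpolation used before feeding images to Inception, which is exactly what I would like to pin down.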
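Likewise for the mIoU question, here is the computation I am assuming, sketched as an illustrative NumPy re-implementation (the class count, ignore index, and label resolution are exactly the details I am unsure about, which is why I ask about the config file and ground-truth resolution above):

```python
import numpy as np


def mean_iou(pred, gt, num_classes, ignore_index=-1):
    """mIoU from a confusion matrix: per-class intersection over union,
    averaged over classes that appear in either prediction or ground truth."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    mask = gt != ignore_index  # pixels with a valid ground-truth label
    # Flattened (gt, pred) pairs binned into a num_classes x num_classes matrix.
    conf = np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())
```

Note that resizing the predicted label map to match the ground-truth resolution (or vice versa) changes the score, so knowing the evaluation resolution matters here too.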
Looking forward to hearing from you, and thanks again for your great work.