Performance and backbone network #17
Comments
Please, John1231983: I tested the model provided by the author on LFW, but the result is 0.661, not 99+. Could I get a link to LFW and its pairs file? I think the problem on my side is the LFW data itself. Thanks in advance.
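For what it's worth, LFW accuracy in the 0.5–0.66 range usually points to a mismatch between the pairs file and the aligned image folder rather than a broken model. Below is a minimal sketch of how the standard pairs.txt is typically parsed; the folder layout and filename pattern are assumed to follow the official LFW release, and the repo's own evaluation code may differ.

```python
# Sketch: parse the standard LFW pairs.txt into (path_a, path_b, is_same) triples.
# Folder layout and "Name/Name_0001.jpg" naming are assumptions based on the
# official LFW release.
import os

def load_lfw_pairs(pairs_path, lfw_dir, ext="jpg"):
    pairs = []
    with open(pairs_path) as f:
        lines = f.readlines()[1:]  # first line holds fold/pair counts
    for line in lines:
        p = line.strip().split()
        if len(p) == 3:      # same identity: Name idx1 idx2
            name, i1, i2 = p
            a = os.path.join(lfw_dir, name, f"{name}_{int(i1):04d}.{ext}")
            b = os.path.join(lfw_dir, name, f"{name}_{int(i2):04d}.{ext}")
            pairs.append((a, b, True))
        elif len(p) == 4:    # different identities: Name1 idx1 Name2 idx2
            n1, i1, n2, i2 = p
            a = os.path.join(lfw_dir, n1, f"{n1}_{int(i1):04d}.{ext}")
            b = os.path.join(lfw_dir, n2, f"{n2}_{int(i2):04d}.{ext}")
            pairs.append((a, b, False))
    return pairs
```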
Hi, did you solve your problem? I have the same problem and would like to know of any solution for this issue.
@quangtn266
I had this problem when I didn't specify the checkpoint in the config file under BACKBONE_RESUME_ROOT |
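For anyone hitting the same thing, here is a minimal sketch of what that fix looks like, assuming a config dict in the style of the repo's config.py; the key names other than BACKBONE_RESUME_ROOT and the checkpoint path are assumptions, not taken from the repo.

```python
# Sketch of the relevant config entry. Only BACKBONE_RESUME_ROOT comes from
# the comment above; the other keys and the path are placeholder assumptions.
configurations = {
    1: dict(
        BACKBONE_NAME = 'IR_101',
        # Point this at the downloaded backbone checkpoint; leaving it at the
        # default means the backbone starts from random weights, which gives
        # near-chance LFW accuracy.
        BACKBONE_RESUME_ROOT = './path/to/backbone_ir101_checkpoint.pth',
        HEAD_RESUME_ROOT = '',  # only needed when resuming training
    ),
}
```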
Thanks for sharing this great work. In your code, you provide several backbone networks. In the Model Zoo, insightface provides LResNet100E-IR. This is your result trained from scratch with IR-101 using your settings, and this is your report.
I have some questions after reading your code:
1. Do you use data augmentation? I found that only RandomHorizontalFlip is applied, while the insightface team uses augmentations such as flip, ColorJitterAug, and compress_aug: https://github.com/deepinsight/insightface/blob/3866cd77a6896c934b51ed39e9651b791d78bb57/recognition/image_iter.py#L207 (a rough PyTorch equivalent is sketched below).
2. I am using 4 GPUs with a batch size of 700 per GPU, and my performance is lower than your report. Do you think the number of GPUs is the reason (you used 8 GPUs)?
3. Is your IR_101 the same as LResNet100E-IR in terms of FLOPs and parameter count? I also found that you save the backbone and head separately, while insightface saves them as a single model; is there any difference? Have you measured the inference speed of IR_101? It feels much slower than mxnet for me (see the timing sketch after this list).
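On question 1, a rough PyTorch approximation of the extra mxnet-side augmentations might look like the following; the jitter strengths, JPEG quality range, and normalization values are illustrative assumptions, not settings taken from either repo.

```python
# Rough PyTorch approximation of ColorJitterAug and compress_aug.
# Parameter values are illustrative assumptions.
import io
import random
from PIL import Image
from torchvision import transforms

def random_jpeg_compress(img, p=0.5, quality=(30, 95)):
    """Re-encode a PIL image as JPEG at a random quality (mimics compress_aug)."""
    if random.random() > p:
        return img
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(*quality))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),   # the only augmentation applied in this repo
    transforms.ColorJitter(brightness=0.125, contrast=0.5, saturation=0.5),
    transforms.Lambda(random_jpeg_compress),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```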
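On question 3, the parameter count and per-image latency are easy to check directly. Here is a sketch; the import path, the IR_101([112, 112]) constructor, and the 112x112 input size are assumptions about how this repo builds the backbone, so adjust to the actual code.

```python
# Quick sanity check for the params / inference-speed questions.
# The import path and IR_101([112, 112]) signature are assumed; adjust to
# however the backbone is actually constructed in the repo.
import time
import torch
from backbone.model_irse import IR_101

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = IR_101([112, 112]).to(device).eval()

n_params = sum(p.numel() for p in model.parameters())
print(f"params: {n_params / 1e6:.1f} M")

x = torch.randn(1, 3, 112, 112, device=device)
with torch.no_grad():
    for _ in range(10):               # warm-up iterations
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):              # timed iterations
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
print(f"avg forward pass: {(time.time() - start) / 100 * 1000:.2f} ms")
```

On the separate backbone/head checkpoints: the head only produces the training logits, so for verification and deployment only the backbone weights matter; saving them in one file or two is just a packaging choice.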