Low accuracy on trillionpairs #554

Closed

tranvanhoa533 opened this issue Feb 21, 2019 · 11 comments

tranvanhoa533 commented Feb 21, 2019

Hello @nttstar
I trained LResNet100E-IR on the emore + asian dataset with 1080Ti GPUs. The accuracy on the Trillion Pairs challenge is very low:

| # | m   | per_batch_size | num_gpu | batch_size | identification | verification |
|---|-----|----------------|---------|------------|----------------|--------------|
| 1 | 0.3 | 62             | 8       | 496        | 71%            | 70.6%        |
| 2 | 0.5 | 62             | 7       | 434        | 55.4%          | 55%          |
| 3 | 0.5 | 180            | 6       | 1080       | 29.7%          | 27%          |

(In the third experiment, I used mxnet-memonger to reduce GPU memory usage.)

I am very puzzled. Can you point out my mistake, please? Thank you very much.

nttstar (Collaborator) commented Feb 22, 2019

Make sure the merging process is correct. It should easily get 80%+ with the m=0.5 ArcFace loss.

tranvanhoa533 (Author) commented
I downloaded the glint asian and emore datasets from the links you shared. After that, I ran dataset_merge.py to merge them.
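
For what it's worth, my understanding is that the one invariant a correct merge must preserve is that the two datasets' identity labels end up disjoint. Below is a minimal sketch of just that label-offset step. It is not dataset_merge.py itself (which, as I understand it, can also deduplicate overlapping identities with a pretrained model), and it assumes plain records with a scalar label, whereas insightface .rec files also carry identity-range metadata records; paths are placeholders:

```python
import mxnet as mx

def merge_rec(in_prefixes, out_prefix):
    """Concatenate indexed .rec datasets, shifting labels so identities stay disjoint."""
    writer = mx.recordio.MXIndexedRecordIO(
        out_prefix + '.idx', out_prefix + '.rec', 'w')
    next_id, label_offset = 0, 0.0
    for prefix in in_prefixes:
        reader = mx.recordio.MXIndexedRecordIO(
            prefix + '.idx', prefix + '.rec', 'r')
        max_label = -1.0
        for key in reader.keys:
            header, img = mx.recordio.unpack(reader.read_idx(key))
            label = float(header.label) + label_offset  # shift this dataset's ids
            max_label = max(max_label, label)
            writer.write_idx(next_id, mx.recordio.pack(
                mx.recordio.IRHeader(0, label, next_id, 0), img))
            next_id += 1
        label_offset = max_label + 1.0  # next dataset starts after this one
    writer.close()

merge_rec(['faces_emore/train', 'faces_glint/train'], 'faces_merged/train')
```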

tranvanhoa533 (Author) commented Mar 4, 2019

Hi @nttstar
I ran another experiment with your default parameters (per-batch-size = 128, num-gpu = 4, r100, m=0.5 ArcFace loss, dataset: emore+asian), but the result on the Trillion Pairs challenge was only 61.19% (identification) and 59.17% (verification). I tried to reproduce your experiment but could not reach your result (84%). Did you clean the dataset when you merged? Did you use any other techniques? Could you share them with me, please? Thank you very much!
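
Concretely, the run I describe used a command along the lines of the stock README invocation (the dataset and model paths are placeholders for my local layout):

```bash
# --loss-type 4 selects the ArcFace loss; paths below are placeholders
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train_softmax.py --network r100 \
    --loss-type 4 --margin-m 0.5 --per-batch-size 128 \
    --data-dir ../datasets/faces_merged --prefix ../models/model-r100
```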

jeremmyzong commented
> Make sure the merging process is correct. It should easily get 80%+ with the m=0.5 ArcFace loss.

Hi @nttstar
Did you use a pretrained model to deduplicate identities during the merging process?
Thanks!

jeremmyzong commented
Hi @tranvanhoa533
I ran into the same problem. I tried a bigger batch size (per-batch-size = 128, num-gpu = 8) and got only 43% (identification) and 39% (verification). Have you solved it?

tranvanhoa533 (Author) commented
Hi @jeremmyzong
I still don't understand why this happens. Have you solved it?

Talgin commented Jul 4, 2019

Hi @nttstar,
Could you provide an example command to run dataset_merge.py?
I'm running it a second time, because the first time, when I tried to merge faces_emore and faces_glint, it returned .idx, .rec, and property files similar to faces_emore's: the file sizes were the same, and the property file said 85772,112,112.

Thank you in advance!
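
For reference, my invocation was along these lines (flag names from my reading of dataset_merge.py's argparse, so please verify with `python dataset_merge.py -h`; paths are placeholders):

```bash
python dataset_merge.py \
    --include ../datasets/faces_emore,../datasets/faces_glint \
    --output ../datasets/faces_merged
# (the script also appears to accept a pretrained --model for identity dedup)
```

As a sanity check, the merged property file should report roughly the sum of the two identity counts, not faces_emore's count alone.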

eguoguo321 commented
How do I use mxnet-memonger? When I enabled it, I got an error.
What else should I do?
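
From the dmlc/mxnet-memonger README, my understanding of the intended usage is roughly the following (the toy symbol and shapes are mine, not insightface's actual network):

```python
import mxnet as mx
import memonger  # https://github.com/dmlc/mxnet-memonger, must be on PYTHONPATH

# Toy stand-in for the real training symbol.
data = mx.sym.Variable('data')
body = mx.sym.Convolution(data=data, num_filter=64, kernel=(3, 3), name='conv0')
body = mx.sym.Activation(data=body, act_type='relu', name='relu0')
body._set_attr(mirror_stage='True')  # mark a point the planner may recompute
out = mx.sym.FullyConnected(data=body, num_hidden=512, name='fc1')

# Search for a memory-optimized plan given the per-GPU input shape
# (62 x 3 x 112 x 112 here, matching per_batch_size above; adjust to yours).
out = memonger.search_plan(out, data=(62, 3, 112, 112))
```

If no layers carry a mirror_stage attribute, the planner has nothing to checkpoint; that is the first thing I would check.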

maywander commented
Did you merge these two datasets successfully, @Talgin?

Talgin commented Oct 9, 2019

Hi @maywander,
For an answer, have a look at thread #256.
Thank you! :)

maywander commented Oct 15, 2019 via email

nttstar closed this as completed Jun 1, 2023