
Training Speed is Slow #273

Closed

ajaykrishnan23 opened this issue Apr 5, 2021 · 2 comments

Comments

@ajaykrishnan23

I tried training EfficientNet_B0 and ResNet18 with the same image_size and other parameters to compare their training speed. Despite EfficientNet-B0 being more than 2x smaller than ResNet18 (which has about 12M parameters), it still took more time to train.

Any help here would be wonderful. Thanks in advance!
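
For reference, a minimal sketch of this kind of timing comparison (batch size, image size, and iteration count below are arbitrary assumptions, not the original settings):

```python
# Sketch: time one training step for ResNet18 vs. EfficientNet-B0 on the same input.
import time
import torch
import torchvision.models as tvm
from efficientnet_pytorch import EfficientNet

def time_step(model, image_size=224, batch_size=32, iters=20):
    model = model.cuda().train()
    x = torch.randn(batch_size, 3, image_size, image_size, device='cuda')
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        opt.zero_grad()
        model(x).sum().backward()  # dummy loss, just to exercise forward + backward
        opt.step()
    torch.cuda.synchronize()
    return (time.time() - start) / iters  # average seconds per step

print("resnet18:       ", time_step(tvm.resnet18()))
print("efficientnet-b0:", time_step(EfficientNet.from_name('efficientnet-b0')))
```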

@gfotedar

I see the same issue. I think at least part of the reason is that PyTorch's depthwise convolution is very slow. See this. Try fp16 if you can by enabling autocast, and update to the latest cuDNN and PyTorch to get the best-optimized kernels.
I've done that and it has helped at least a bit, but it's still slower than ResNet with the same number of parameters in fp16.
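
For anyone finding this later, a minimal sketch of what enabling autocast looks like in a standard training loop (`model`, `loader`, `optimizer`, and `criterion` are placeholders for your own setup):

```python
# Mixed-precision training with autocast + GradScaler (PyTorch 1.6+).
import torch

scaler = torch.cuda.amp.GradScaler()

for images, targets in loader:
    images, targets = images.cuda(), targets.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # run the forward pass in fp16 where safe
        outputs = model(images)
        loss = criterion(outputs, targets)
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```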

@lukemelas (Owner)

Yes, as @gfotedar mentioned, PyTorch fp32 depthwise convs are quite slow. Also, since EfficientNet uses depthwise convs, it's always going to be slower than a model with the same number of parameters that does not use depthwise convs.
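
For context, a depthwise convolution is just a `Conv2d` with `groups` equal to the number of input channels; a small illustration of why it has far fewer parameters even though PyTorch's fp32 kernels for it are comparatively slow:

```python
import torch
import torch.nn as nn

# A regular conv mixes all input channels; a depthwise conv (groups=in_channels)
# applies one filter per channel, so it has far fewer parameters and FLOPs,
# but the fp32 kernels for it are less optimized and can run slower in practice.
regular = nn.Conv2d(32, 32, kernel_size=3, padding=1)               # 32*32*3*3 weights
depthwise = nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32)  # 32*1*3*3 weights

print(sum(p.numel() for p in regular.parameters()))    # 9248
print(sum(p.numel() for p in depthwise.parameters()))  # 320
```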
