Use oneDNN #74

For improved CPU performance it would be grand if we could optionally use the open-source MKL-DNN library: https://github.com/intel/mkl-dnn

Comments
Unfortunately, while MKL-DNN is OSS, it depends on the closed-source MKL (rather than using a generic BLAS interface). So it would be harder to integrate with than NNPACK, which (I think) provides similar speedups in many cases.
There is a build flag to turn off MKL usage.
We are actively working on NNPACK in #67. My main issue with MKL-DNN is that it seems to work best if you build its computational graph, rather than exposing a simple cuDNN-style conv kernel. This is not an expert opinion, though, so if someone can hack up the right set of commands I'm on board with it. See also FluxML/Flux.jl#157
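For what it's worth, oneDNN does expose a cuDNN-style, single-primitive convolution alongside its graph API. Below is a minimal sketch of that path, assuming the oneDNN v3.x C++ API (the constructor signatures differ in older MKL-DNN releases); the shapes and values are arbitrary illustration and nothing here reflects an actual NNlib integration.

```cpp
// Minimal single-primitive convolution with the oneDNN v3.x C++ API.
// Shapes are arbitrary: 1x1x16x16 input, 3x3 kernel, 1 -> 8 channels.
#include <dnnl.hpp>
#include <vector>

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    const memory::dims src_dims = {1, 1, 16, 16};  // NCHW
    const memory::dims wei_dims = {8, 1, 3, 3};    // OIHW
    const memory::dims dst_dims = {1, 8, 14, 14};  // (16 - 3) + 1 = 14
    const memory::dims strides = {1, 1}, pad = {0, 0};

    auto src_md = memory::desc(src_dims, memory::data_type::f32, memory::format_tag::nchw);
    auto wei_md = memory::desc(wei_dims, memory::data_type::f32, memory::format_tag::oihw);
    auto dst_md = memory::desc(dst_dims, memory::data_type::f32, memory::format_tag::nchw);

    std::vector<float> src(1 * 1 * 16 * 16, 1.f), wei(8 * 1 * 3 * 3, 1.f),
                       dst(1 * 8 * 14 * 14, 0.f);
    memory src_mem(src_md, eng, src.data());
    memory wei_mem(wei_md, eng, wei.data());
    memory dst_mem(dst_md, eng, dst.data());

    // One primitive descriptor, one execute call; no graph construction involved.
    auto conv_pd = convolution_forward::primitive_desc(eng,
        prop_kind::forward_inference, algorithm::convolution_direct,
        src_md, wei_md, dst_md, strides, pad, pad);
    convolution_forward(conv_pd).execute(strm, {
        {DNNL_ARG_SRC, src_mem},
        {DNNL_ARG_WEIGHTS, wei_mem},
        {DNNL_ARG_DST, dst_mem}});
    strm.wait();
    return 0;
}
```

This should build with something like `g++ conv.cpp -ldnnl` against an installed oneDNN. In real use you would pass `format_tag::any` and insert reorders so the library can pick blocked layouts, but the point here is just that a plain conv primitive exists without the graph machinery.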
I think it doesn't have the MKL dependency anymore.
I'm bumping this since there is now an in-progress PR for adding oneDNN to BinaryBuilder: JuliaPackaging/Yggdrasil#4550. The confluence of NNPACK being unmaintained, NNlib having dropped NNPACK, and us not having much capacity to maintain kernels means that this is once again an attractive proposition.