Problem with the use_mkldnn flag #10765
I think that makes sense.

How about using

It would be great to have a global
@luotao1 Thank you for this issue. I will use the sum operator as an example to describe this problem. During the implementation of this operator I have encountered a few difficulties. The current MKL-DNN version of the sum operator is used inside other operators, so the flag cannot be used in this way. For instance: To that end, I agree with @kbinias.
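To make the difficulty concrete, here is a minimal sketch (the operator names and signatures are illustrative, not the actual Fluid implementation) of why a per-call flag cannot reach an operator that the framework invokes internally:

```python
# Hypothetical sketch: a per-call use_mkldnn flag does not propagate to
# operators that another operator creates internally.

def sum_op(inputs, use_mkldnn=False):
    # Returns which backend was selected alongside the result.
    backend = "mkldnn" if use_mkldnn else "plain"
    return (f"sum[{backend}]", sum(inputs))

def batch_norm_op(x, use_mkldnn=False):
    # batch_norm calls sum_op internally; the user's use_mkldnn choice is
    # not forwarded unless every internal call site is edited by hand.
    _, total = sum_op([x, 1.0])  # internal call: the flag is lost here
    return total

# The user asked for MKL-DNN, but the inner sum still ran without it.
print(batch_norm_op(2.0, use_mkldnn=True))
```

A single global switch, consulted inside each operator, avoids having to thread the flag through every internal call site.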
That makes sense, the user can
For example, such a use case (the names of the APIs are not important):

```python
set_default_use_mkldnn(True)
v1 = op0(v0)  # use mkldnn by default if supported
# ...
v2 = op1(v1, use_mkldnn=False)  # turn off MKLDNN for this operator
# ...
```

The above function is better.
We will discuss how to add this global flag function to our framework ASAP.
@luotao1 Any ETA for this?
@mrysztow Discussed with @Superjomn and @tensor-tang, the newest design is:
We will finish it ASAP before next meeting. |
@luotao1 What's the reason for using an environment variable? Does it mean that some parameters will be passed as script arguments and some of them will be passed as environment variables?
I think an environment variable might be more general, since it covers both Python scripts and C++ binaries. Or do you have any suggestions?
@Superjomn I think it is a good idea, as we currently need to change all model scripts to add this option. But will it be consistent with forcing the usage of other backends, like GPU?
That depends; MKLDNN might be the first backend to get a global runtime switch. I suggest we use the environment variable first, with two singleton functions, one in Python and one in C++, to wrap it, called If the environment variable's name or this plan changes, updating those two functions is easier and more flexible.
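The Python side of that wrapper could look like the sketch below. The environment variable name `FLAGS_use_mkldnn` and the function name are assumptions for illustration, not the agreed design:

```python
import os

# Sketch of a singleton-style wrapper around an environment variable.
# The variable name FLAGS_use_mkldnn is an assumption, not final.
_use_mkldnn_cache = None

def use_mkldnn_enabled():
    """Read the environment variable once and cache the result.

    Keeping the variable name in a single place means that if the name
    or the whole plan changes, only this function needs updating.
    """
    global _use_mkldnn_cache
    if _use_mkldnn_cache is None:
        raw = os.environ.get("FLAGS_use_mkldnn", "0")
        _use_mkldnn_cache = raw.strip().lower() in ("1", "true", "on")
    return _use_mkldnn_cache
```

A matching C++ function would read the same variable via `std::getenv`, so both language frontends agree on one switch.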
@Superjomn My main concern was consistency and intuition in how parameters are passed to the model scripts. I mean, a user will have to set an environment variable to run something on a CPU with MKLDNN, which is different from running something on a GPU, which requires passing an argument to the model script. However, it's true what you're saying: this approach is more flexible, as it doesn't require changing the model scripts.
Yeah, I understand your concern.
The environment variable is a low-level API and easy to hide in many ways. We can concentrate on the implementation of the MKLDNN functions first, and have another discussion about this API after the MKLDNN functions are ready and released to users.
This issue is from the Intel side.
The problem we have with the use_mkldnn flag is that we need to modify all models to enable newly integrated MKL-DNN ops. It would be great if we could just add an integration and, when Fluid is run with the use_mkldnn parameter, have all available ops be used without modifying the model scripts to pass the option.