Patch for parallel option "offload" #105
Conversation
Introduced by @reazulhoque. Possibly numba should provide a way to extend parallel options. The field `gen_spirv` corresponds to the `offload` option; `gen_spirv` is used only in `numba.dppl`.
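For reference, a sketch of the usage being removed, assuming the fork accepted "offload" inside the parallel-options dict as this thread describes (this only ever worked in the IntelPython fork, not in upstream Numba):

```python
# Historical sketch (assumption based on this thread): in the IntelPython
# fork, passing "offload" in the parallel options dict set the internal
# ParallelOptions.gen_spirv field, which only numba.dppl consumed.
import numpy as np
from numba import njit, prange

@njit(parallel={"offload": True})  # the option this PR removes
def increment(a):
    for i in prange(a.size):
        a[i] += 1

increment(np.zeros(10))  # fork-only; upstream Numba rejects unknown options
```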
This will certainly remove `gen_spirv` as a baked-in Numba parallel option. If you had access at the right time in the compile process, you could monkey-patch this option back in from outside of default Numba. If these PRs create a hook at the appropriate point, then monkey-patching would work.
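For illustration, a minimal monkey-patch sketch of that idea, assuming the option dict is still validated by `ParallelOptions` in `numba.core.cpu` (the patching point and the `gen_spirv` attribute handling are assumptions, not a supported Numba API):

```python
# Hedged sketch only: re-adding a fork-specific parallel sub-option from
# outside Numba by intercepting option validation. Not a supported API;
# module path and attribute names are assumptions.
from numba.core import cpu

_orig_init = cpu.ParallelOptions.__init__

def _patched_init(self, value):
    if isinstance(value, dict):
        # Pop the extension-specific key before Numba validates the dict,
        # so unknown-option checking does not reject it.
        self.gen_spirv = value.pop("offload", False)
    else:
        self.gen_spirv = False
    _orig_init(self, value)

cpu.ParallelOptions.__init__ = _patched_init
```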
Investigation results:
Final conclusion: we need to modify tests and examples first and then remove the option. /CC @diptorupd
What is the proposal for how to convey offload mode to the compilation pipeline when compiling a function called within a `with` context? @PokhodenkoSA
Sorry, I didn't understand the question. Could you please provide an example? It is possible that we missed something.
Previously, compilation would look at the parallel flags to see if it should go through the GPU offload code. If that flag is removed from the parallel flags object, what will compilation look at to determine whether to take the GPU offload path?
Hi @DrTodd13, I think right now we check whether we are in a dpctl device_context (https://github.com/IntelPython/numba/blob/pydppl/numba/dppl/target_dispatcher.py#L63).
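In usage terms, offload is then selected by the enclosing context rather than by a compile-time flag. A minimal sketch, assuming a `numba.dppl`-enabled build and dpctl's `device_context` API (the device string and GPU availability are assumptions about the local setup):

```python
import numpy as np
import dpctl
from numba import njit, prange

@njit(parallel=True)  # no "offload" option; plain parallel semantics
def add(a, b):
    c = np.empty_like(a)
    for i in prange(a.size):
        c[i] = a[i] + b[i]
    return c

a = np.arange(1024, dtype=np.float32)
b = np.ones_like(a)

# The target dispatcher checks for an active dpctl device context and,
# if one is present, routes compilation through the GPU offload pipeline.
with dpctl.device_context("opencl:gpu"):
    gpu_result = add(a, b)

cpu_result = add(a, b)  # outside the context: the regular CPU path
```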
@diptorupd @reazulhoque
Yes, the semantics we finalized are to not add any new parallel option; offload mode should instead be inferred from the enclosing dpctl device_context.
Parallel option "offload" was removed.