Check for CUDA availability at run time. #916
Conversation
Seems sensible to me, given the options we have right now.
Updated to include newly released GPU packages that contain the necessary functionality. That also means only supporting Julia 1.2+ though.
OK, done.
Dropping support for 1.0 is not ideal; is there no way the loading changes can be backported to previous Cu* releases?
Yeah, maintaining backwards compatibility would be nice.
Yes, JuliaLang/julia#31403 as used in CUDAnative. I'll have a look at providing llvmcall alternatives for certain functionality.
Hi, I think I encountered an error after this PR, using the latest master branch with the following code:
Yes, thanks for the report, looking into it.
WIP: necessary changes over at CuArrays/CUDAdrv/CUDAnative have not been pushed/released yet.

Continuing the package loading / conditional dependency saga, here is another attempt to cover all requirements. With the recent CUDAapi.jl-based scheme, we made it possible to add regular Pkg dependencies on GPU packages, but that had one important flaw: once the application (e.g. Flux) was precompiled without GPU support, there was no easy way for the user to "fix" GPU support and reload Flux. So we added some hacks to detect that and remove the compile cache during `__init__`.

Furthermore, several users want the GPU packages and applications like Flux to be precompilable on a system without a GPU, e.g. the login node of a cluster, or during the build step of a container. This is similarly incompatible with conditionally loading GPU packages, which would bake the GPU-less state into the precompilation image.

There are also certain parts of our infrastructure, like Documenter (being tightly linked to Travis) and the new automatic package registry automerge bot, that expect packages to be loadable at all times.

Bottom line: the GPU stack should be loadable and precompilable regardless of whether it can and will be used. I'm working on exactly that right now, where CUDAdrv/CUDAnative/CuArrays would always be loadable, print a (silenceable) warning if something goes wrong, and have a `$module.functional()` method to query the state of the package. In this PR, I adapt Flux to use those APIs.

Advantages:

- Flux can always be loaded and precompiled, with GPU support decided at run time instead of being baked into the compile cache.

Disadvantages:

- Functions like `gpu` are now type unstable (returning either an `Array` or a `CuArray`).
- CUDNN is mandatory, as there's currently no easy way to conditionally enable that functionality based on a run-time flag.

Both disadvantages could be worked around by evaluating code at run time, but that would negate some of the precompilation advantages.
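The run-time query pattern described above could be sketched like this; note that the `use_cuda` flag and the `gpu` helper below are illustrative assumptions, not Flux's actual implementation:

```julia
using CuArrays  # with this change, always loadable, even on a machine without a GPU

# Query the package state once at load time; CuArrays.functional()
# is the $module.functional() method described above.
const use_cuda = CuArrays.functional()

# Move data to the GPU only when the CUDA stack actually works.
# This is exactly the type instability listed under the disadvantages:
# the return type is Array or CuArray depending on a run-time flag.
gpu(x::AbstractArray) = use_cuda ? CuArrays.cu(x) : x
```

This keeps the decision at run time rather than precompile time, so the same compile cache works on machines with and without a GPU.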
Thoughts? @MikeInnes