create NNlibCUDA sub-package #286
Conversation
@DhairyaLGandhi could you add buildkite to the repo and get rid of AppVeyor?
Tests are passing locally on top of JuliaGPU/CUDA.jl#753. This is pretty much done. Before merging, we should test it against JuliaGPU/CUDA.jl#753 (once it is merged in master) using buildkite (bump on @DhairyaLGandhi for activating it).
CUDA master removed NNlib, so we should hurry up with this. @DhairyaLGandhi, can you activate buildkite for this repo?
Follow-up to the discussion in JuliaGPU/CUDA.jl#738 and on Zulip.
The idea is to create a package within the same repository as NNlib.jl to make maintenance easier.
This is a port and factorization of the nnlib.jl file (and related code) from CUDA.jl, so that CUDA.jl can drop its NNlib dependency (PR JuliaGPU/CUDA.jl#753).
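A sub-package living in the same repository typically gets its own Project.toml and source tree. A minimal sketch of what that layout might look like (directory names, module contents, and the placeholder UUID are illustrative, not the structure actually merged):

```shell
# Hypothetical scaffold for an NNlibCUDA sub-package inside the NNlib.jl repo.
mkdir -p NNlibCUDA/src NNlibCUDA/test

cat > NNlibCUDA/Project.toml <<'EOF'
name = "NNlibCUDA"
uuid = "00000000-0000-0000-0000-000000000000"  # placeholder, not the real UUID
version = "0.1.0"

[deps]
# NNlib and CUDA would be listed here with their real UUIDs
EOF

cat > NNlibCUDA/src/NNlibCUDA.jl <<'EOF'
module NNlibCUDA
# GPU methods ported from CUDA.jl's nnlib.jl would live here.
end
EOF
```

The sub-package can then be developed and tested against the parent package with `Pkg.develop(path="NNlibCUDA")` from the repository root.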
Besides this PR, Flux.jl should remove its dependency on CUDA.batchnorm and use the new cuDNN wrapper interface. The CI pipeline is copied from https://github.com/JuliaGPU/KernelAbstractions.jl.
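For reference, a KernelAbstractions.jl-style Buildkite pipeline for GPU CI reduces to something like the following sketch (the queue name, plugin versions, and Julia version are assumptions based on the general JuliaGPU setup, not the exact file copied here):

```yaml
steps:
  - label: "Julia / CUDA tests"
    plugins:
      - JuliaCI/julia#v1:
          version: "1.6"        # assumed Julia version
      - JuliaCI/julia-test#v1: ~
    agents:
      queue: "juliagpu"         # assumed GPU-enabled agent queue
      cuda: "*"
    timeout_in_minutes: 60
```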
cc @maleadt @denizyuret
THINGS TO DO AFTER MERGE: