GPU Attempt 2 #472

Draft · wants to merge 12 commits into base: master
Conversation

@willtebbutt (Member) commented Aug 31, 2022

Summary

Supersedes #470. I messed up some of my gitconfig settings and had commits with the wrong username, etc. The easiest thing was to start again.

Proposed changes

  • ...

What alternatives have you considered?

Breaking changes

edit:

TODO:

  1. sort out codecov on buildkite branch
  2. sort out formatting

@willtebbutt changed the title from "Attempt 2" to "GPU Attempt 2" on Aug 31, 2022
codecov bot commented Aug 31, 2022

Codecov Report

Base: 90.00% // Head: 86.53% // Decreases project coverage by 3.47% ⚠️

Coverage data is based on head (12fe669) compared to base (dede8a0).
Patch coverage: 18.57% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #472      +/-   ##
==========================================
- Coverage   90.00%   86.53%   -3.48%     
==========================================
  Files          52       53       +1     
  Lines        1351     1478     +127     
==========================================
+ Hits         1216     1279      +63     
- Misses        135      199      +64     
Impacted Files Coverage Δ
src/KernelFunctions.jl 100.00% <ø> (ø)
src/distances/cuda.jl 0.00% <0.00%> (ø)
src/TestUtils.jl 47.36% <6.55%> (-47.37%) ⬇️
src/basekernels/fbm.jl 96.07% <77.77%> (-3.93%) ⬇️
src/basekernels/exponential.jl 100.00% <100.00%> (ø)
src/basekernels/matern.jl 100.00% <100.00%> (ø)
src/basekernels/wiener.jl 92.85% <100.00%> (ø)
src/transform/scaletransform.jl 100.00% <0.00%> (+13.33%) ⬆️
src/transform/periodic_transform.jl 50.00% <0.00%> (+20.00%) ⬆️
src/transform/transform.jl 100.00% <0.00%> (+40.00%) ⬆️
... and 5 more

☔ View full report at Codecov.

@willtebbutt willtebbutt mentioned this pull request Aug 31, 2022
@willtebbutt (Member, Author) commented:

@theogf there are a couple of minor things left to do, but I think this is definitely ready for review.

@theogf (Member) left a comment

Wow, that's an amazing load of good work. My only concern is that a lot of the code consists of getting pairwise right, and in a better world this would live in an external package, especially given the new CUDA dependency...

@@ -176,27 +190,108 @@ function test_interface(k::Kernel, T::Type{<:Real}=Float64; kwargs...)
return test_interface(Random.GLOBAL_RNG, k, T; kwargs...)
end

const FloatType = Union{Float32, Float64}
Member

Why not AbstractFloat?
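The question can be made concrete with a quick subtype check (a standalone sketch; `FloatType` is redefined locally here to mirror the PR's alias, not imported from it):

```julia
# Local stand-in for the alias introduced in the PR.
const FloatType = Union{Float32, Float64}

# AbstractFloat also covers Float16 and BigFloat, which the Union excludes.
f16_is_abstract = Float16 <: AbstractFloat    # true
f16_is_floattype = Float16 <: FloatType       # false
```

So restricting to the Union rules out Float16/BigFloat inputs, which may or may not be intended.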

@@ -47,6 +47,7 @@ export IndependentMOKernel,
export tensor, ⊗, compose

using Compat
using CUDA
Member

Unless things changed, that's quite a massive dependency, no? I saw CuArrays.jl got deprecated; does this mean that CUDA.jl is light now?

Member

As we already depend on Requires, maybe it could be hidden behind a @require?
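For reference, the Requires.jl pattern alluded to here looks roughly like this (a sketch of a package `__init__`, not runnable standalone; the include path comes from this PR's new file, and the UUID is CUDA.jl's registry entry, so treat the details as assumptions):

```julia
# Inside the KernelFunctions module: CUDA-specific code loads only when the
# user has loaded CUDA themselves, keeping it out of the hard dependencies.
using Requires

function __init__()
    @require CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba" begin
        include("distances/cuda.jl")
    end
end
```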

Member

Alternatively, maybe the lightweight GPUArraysCore (https://github.com/JuliaGPU/GPUArrays.jl/tree/master/lib/GPUArraysCore) is sufficient?

@@ -0,0 +1,14 @@
steps:
Member

Noice!

end

_to_cpu(x::CuArray{<:Real}) = Array(x)
_to_cpu(x::ColVecs{<:Real, <:CuMatrix}) = ColVecs(_to_cpu(x.X))
Member

Can't we just overload cpu on our type? (Or is cpu part of Flux? I don't remember...)

Member

Could we use/define Adapt.adapt? And let users pass the types they want to adapt it to?
According to the README of Adapt, we probably would want to define

Adapt.adapt_structure(to, x::ColVecs{<:Real}) = ColVecs(adapt(to, x.X))
Adapt.adapt_structure(to, x::RowVecs{<:Real}) = RowVecs(adapt(to, x.X))

One could then use e.g. adapt(CuArray, x) or adapt(Array, x), where x could be a ColVecs{<:Real, Matrix{<:Real}}/RowVecs{<:Real, Matrix{<:Real}} or a ColVecs{<:Real, <:CuMatrix}/RowVecs{<:Real, <:CuMatrix} (for the other direction). If the testing function allowed the desired array type (such as Array or CuArray) to be specified, users/developers could pass it, and maybe we wouldn't have to deal with it in KernelFunctions at all?
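The adapt pattern can be sketched without pulling in Adapt.jl or CUDA at all (hypothetical `MyColVecs`/`myadapt` stand-ins, not the real KernelFunctions or Adapt API):

```julia
# Simplified stand-in for ColVecs: a matrix whose columns are the vectors.
struct MyColVecs{T,M<:AbstractMatrix{T}}
    X::M
end

# Stand-in for Adapt.adapt: convert the underlying storage while preserving
# the wrapper's structure, as adapt_structure definitions would.
myadapt(::Type{A}, x::AbstractMatrix) where {A<:AbstractMatrix} = convert(A, x)
myadapt(::Type{A}, x::MyColVecs) where {A<:AbstractMatrix} = MyColVecs(myadapt(A, x.X))

cv = MyColVecs([1 2; 3 4])
cv32 = myadapt(Matrix{Float32}, cv)   # storage converted, wrapper kept
```

With the real packages, `adapt(CuArray, x)` and `adapt(Array, x)` would play the roles of the two conversion directions.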

@@ -77,7 +77,7 @@ end

Matern32Kernel(; metric=Euclidean()) = Matern32Kernel(metric)

kappa(::Matern32Kernel, d::Real) = (1 + sqrt(3) * d) * exp(-sqrt(3) * d)
kappa(::Matern32Kernel, d::T) where {T<:Real} = (1 + T(sqrt(3)) * d) * exp(-T(sqrt(3)) * d)
Member

Can't T(sqrt(3)), or at least sqrt(3) be precalculated?

Member

One could use

Suggested change
kappa(::Matern32Kernel, d::T) where {T<:Real} = (1 + T(sqrt(3)) * d) * exp(-T(sqrt(3)) * d)
function kappa(::Matern32Kernel, d::Real)
    x = sqrt3 * d
    return (1 + x) * exp(-x)
end

and load IrrationalConstants.sqrt3 in src/KernelFunctions.jl
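The type-stability issue being fixed can be demonstrated directly (standalone functions for illustration, not the PR's actual kappa methods; `oftype` stands in for what IrrationalConstants.sqrt3 achieves via irrational-to-T conversion):

```julia
# With a bare sqrt(3), the Float64 constant promotes Float32 inputs.
kappa_promoting(d::Real) = (1 + sqrt(3) * d) * exp(-sqrt(3) * d)

# Converting the constant to the input's type keeps the output type stable.
function kappa_stable(d::Real)
    x = sqrt(oftype(d, 3)) * d
    return (1 + x) * exp(-x)
end
```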

@@ -108,7 +108,7 @@ end

Matern52Kernel(; metric=Euclidean()) = Matern52Kernel(metric)

kappa(::Matern52Kernel, d::Real) = (1 + sqrt(5) * d + 5 * d^2 / 3) * exp(-sqrt(5) * d)
kappa(::Matern52Kernel, d::T) where {T<:Real} = (1 + T(sqrt(5)) * d + 5 * d^2 / 3) * exp(-T(sqrt(5)) * d)
Member

Same here with sqrt(5)

@devmotion (Member) commented Sep 1, 2022

Unfortunately, IrrationalConstants doesn't define sqrt5 (maybe it should be added?), but one could still improve it a bit:

Suggested change
kappa(::Matern52Kernel, d::T) where {T<:Real} = (1 + T(sqrt(5)) * d + 5 * d^2 / 3) * exp(-T(sqrt(5)) * d)
function kappa(::Matern52Kernel, d::Real)
    x = sqrt(oftype(d, 5)) * d
    return (1 + x + 5 * d^2 / 3) * exp(-x)
end

This would also be a bit safer as it also works if d is not a floating point number.

@@ -62,22 +62,22 @@ function (::WienerKernel{1})(x, y)
X = sqrt(sum(abs2, x))
Y = sqrt(sum(abs2, y))
minXY = min(X, Y)
return 1 / 3 * minXY^3 + 1 / 2 * minXY^2 * euclidean(x, y)
return minXY^3 / 3 + minXY^2 * euclidean(x, y) / 2
Member

Nice catches here!
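The reordering matters for element types, not just readability: in Julia, `1 / 3` is a Float64 value, so multiplying a Float32 quantity by it promotes the result to Float64, while dividing by the integer `3` preserves Float32 (standalone one-liners for illustration, not the PR's kernel code):

```julia
# Old form: the Float64 value 1/3 promotes Float32 inputs to Float64.
old_term(m::Real) = 1 / 3 * m^3

# New form: dividing by the integer 3 keeps the input's float type.
new_term(m::Real) = m^3 / 3
```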

Comment on lines +51 to +58
_ones(el_type, x::AbstractVector, sz...) = ones(el_type, sz...)
function _ones(
    el_type,
    x::Union{CuVector{<:Real}, ColVecs{<:Real, <:CuMatrix}, RowVecs{<:Real, <:CuMatrix}},
    sz...,
)
    return CUDA.ones(el_type, sz...)
end
Member

Could we use adapt here?
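One dependency-free alternative for plain arrays is `similar` + `fill!`, which allocates in the same array family as the input (a CuArray input would yield a CuArray) without branching on CUDA types. The helper name `ones_like` is hypothetical, and the ColVecs/RowVecs cases would still need to unwrap `x.X` first:

```julia
# Allocate in the same array family as x, then fill with ones.
ones_like(el_type, x::AbstractArray, sz...) =
    fill!(similar(x, el_type, sz...), one(el_type))

v = ones_like(Float32, [1.0, 2.0], 3)   # Vector{Float32} of ones
```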

Comment on lines +2 to +6
const CuSubArray{T} = SubArray{T, <:Any, <:CuArray}
const CuColVecs{T} = ColVecs{T, <:Union{CuMatrix{T}, CuSubArray{T}}}
const CuRowVecs{T} = RowVecs{T, <:Union{CuMatrix{T}, CuSubArray{T}}}

const _CuVector{T} = Union{CuVector{T}, SubArray{T, 1, <:CuArray}}
Member

Do we need to restrict it to these types? Could we just define our pairwise (IIRC we own it?) in this way for generic AbstractVector/ColVecs/RowVecs, and only forward the Vector{<:Real}/ColVecs{<:Real,Matrix{<:Real}}/... to Distances?
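The dispatch layering proposed here can be sketched with a toy `mypairwise` (hypothetical names, not KernelFunctions' real pairwise): a generic method that works for any AbstractVector, plus a narrow method that forwards plain CPU vectors to a specialized routine.

```julia
# Generic fallback: works for any AbstractVector, including GPU-backed ones
# (broadcasting lowers to GPU kernels for CuArrays).
mypairwise(f, x::AbstractVector, y::AbstractVector) = f.(x, permutedims(y))

# Narrow fast path: plain CPU vectors go to a specialized routine
# (standing in for the call into Distances.jl).
mypairwise(f, x::Vector{<:Real}, y::Vector{<:Real}) =
    [f(xi, yj) for xi in x, yj in y]
```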

@devmotion (Member) left a comment

@willtebbutt Do you think it will take some time to sort out all remaining comments and issues in this PR? If so, we could add only the buildkite configuration as a first step to fix CI errors and add GPU-specific tests in a follow-up PR, similar to JuliaGaussianProcesses/AbstractGPs.jl#335.

plugins:
  - JuliaCI/julia#v1:
      version: "1"
  - JuliaCI/julia-test#v1: ~
Member

Suggested change
- JuliaCI/julia-test#v1: ~
- JuliaCI/julia-test#v1: ~
- JuliaCI/julia-coverage#v1:
codecov: true

timeout_in_minutes: 60

env:
GROUP: CUDA
Member

Suggested change
GROUP: CUDA
GROUP: CUDA
SECRET_CODECOV_TOKEN: "t5UlmhN6IORkngUHsuzYWkMJoIcAu+appM1fZJxZitCmw8gn8EA4jukHSaps23tMpVI19qgHr/F6XeS5M/SG1+lrSa/cG2e2kAD03E34vhK0+P6Fx8CZxL3RsIc1XDSX9qEs1/BGUNHP8t0B/vQumJRqPH5F+IGXhR5yRolhqquJZ5OUHwMGo2+FWY12YWehGjXOTCy/y0f0vYAysLU+TPF2Xa8xpDEJtCQlFDFVSPwtBFqB+8XD9bmtGQkDFfZOw0/5dHCWKMmr/E3Z9xXHgIk74mN91PtVJZDQjEVZOrPLOMOIheTtQFSErVXoKXBXElorcAY96oJQVuvqK61H/A==;U2FsdGVkX18ne7l2AH/iweW0yXOQV+yawNU6Ax21o+vkqTztlWAelcsyPaaMFSqpsYT0+uHAHQTPfqzuKJH7QA=="

@willtebbutt (Member, Author) commented:
@willtebbutt Do you think it will take some time to sort out all remaining comments and issues in this PR? If so, we could add only the buildkite configuration as a first step to fix CI errors and add GPU-specific tests in a follow-up PR, similar to JuliaGaussianProcesses/AbstractGPs.jl#335.

That would be great. I thought I was going to have time to follow up on this immediately, but it's actually going to be a little while. If you could open a PR like the one you made for AbstractGPs, that would be ideal.

@willtebbutt willtebbutt marked this pull request as draft September 15, 2023 19:55
3 participants