Add NormalizedKernel #274

Merged · 11 commits · Apr 15, 2021
1 change: 1 addition & 0 deletions Project.toml
@@ -6,6 +6,7 @@ version = "0.9.1"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
1 change: 1 addition & 0 deletions docs/src/kernels.md
@@ -125,6 +125,7 @@ ScaledKernel
KernelSum
KernelProduct
KernelTensorProduct
NormalizedKernel
```

## Multi-output Kernels
4 changes: 3 additions & 1 deletion src/KernelFunctions.jl
@@ -17,7 +17,7 @@ export RationalQuadraticKernel, GammaRationalQuadraticKernel
export GaborKernel, PiecewisePolynomialKernel
export PeriodicKernel, NeuralNetworkKernel
export KernelSum, KernelProduct, KernelTensorProduct
-export TransformedKernel, ScaledKernel
+export TransformedKernel, ScaledKernel, NormalizedKernel

export Transform,
SelectTransform,
@@ -52,6 +52,7 @@ using ZygoteRules: ZygoteRules
using StatsFuns: logtwo, twoπ
using StatsBase
using TensorCore
using FillArrays

abstract type Kernel end
abstract type SimpleKernel <: Kernel end
@@ -88,6 +89,7 @@ include(joinpath("basekernels", "wiener.jl"))

include(joinpath("kernels", "transformedkernel.jl"))
include(joinpath("kernels", "scaledkernel.jl"))
include(joinpath("kernels", "normalizedkernel.jl"))
include(joinpath("matrix", "kernelmatrix.jl"))
include(joinpath("kernels", "kernelsum.jl"))
include(joinpath("kernels", "kernelproduct.jl"))
81 changes: 81 additions & 0 deletions src/kernels/normalizedkernel.jl
@@ -0,0 +1,81 @@
"""
    NormalizedKernel(k::Kernel)

A normalized kernel derived from `k`.

# Definition

For inputs ``x, x'``, the normalized kernel ``\\widetilde{k}`` derived from
kernel ``k`` is defined as
```math
\\widetilde{k}(x, x'; k) = \\frac{k(x, x')}{\\sqrt{k(x, x) k(x', x')}}.
```
"""
struct NormalizedKernel{Tk<:Kernel} <: Kernel
    kernel::Tk
end

@functor NormalizedKernel

(κ::NormalizedKernel)(x, y) = κ.kernel(x, y) / sqrt(κ.kernel(x, x) * κ.kernel(y, y))
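A quick usage sketch of the new kernel (the base kernel and inputs below are illustrative and not taken from the PR):

```julia
using KernelFunctions

k = 3.0 * SqExponentialKernel()   # any base kernel works
kn = NormalizedKernel(k)

x, y = 0.2, 1.5
kn(x, y)   # == k(x, y) / sqrt(k(x, x) * k(y, y))
kn(x, x)   # ≈ 1.0: a normalized kernel is one on the diagonal
```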

function kernelmatrix(κ::NormalizedKernel, x::AbstractVector, y::AbstractVector)
    return kernelmatrix(κ.kernel, x, y) ./
           sqrt.(
        kernelmatrix_diag(κ.kernel, x) .* permutedims(kernelmatrix_diag(κ.kernel, y))
    )
end

Member:
Isn't it generally more efficient to compute the sqrt first? This is probably a performance detail, though (and I don't know about machine accuracy).

Contributor Author:
Doesn't this all get fused anyway, though?
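For reference, a minimal sketch of the "sqrt first" variant raised in the review; the helper name below is hypothetical and not part of the PR, and the expression is mathematically equivalent because sqrt(a * b) = sqrt(a) * sqrt(b) for nonnegative diagonal entries:

```julia
# Hypothetical alternative discussed above: take the square roots of the two
# diagonal vectors once, then divide by the outer product of the roots.
function kernelmatrix_sqrt_first(κ::NormalizedKernel, x::AbstractVector, y::AbstractVector)
    sx = sqrt.(kernelmatrix_diag(κ.kernel, x))
    sy = sqrt.(kernelmatrix_diag(κ.kernel, y))
    return kernelmatrix(κ.kernel, x, y) ./ (sx .* permutedims(sy))
end
```

Whether this is actually faster depends on whether the broadcast in the merged version fuses, as the author notes.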

function kernelmatrix(κ::NormalizedKernel, x::AbstractVector)
    x_diag = kernelmatrix_diag(κ.kernel, x)
    return kernelmatrix(κ.kernel, x) ./ sqrt.(x_diag .* permutedims(x_diag))
end

function kernelmatrix_diag(κ::NormalizedKernel, x::AbstractVector)
    first_x = first(x)
    return Fill(κ(first_x, first_x), length(x))
end

@theogf (Member), Apr 10, 2021:
Can't we replace Fill with Ones here?

Suggested change:
-    return Fill(κ(first_x, first_x), length(x))
+    return Ones{typeof(κ(first_x, first_x))}(length(x))

@theogf (Member), Apr 10, 2021:
Or can some kernel return a negative value... in which case both propositions are wrong :p

@rossviljoen (Contributor Author), Apr 10, 2021:
Looks like this fixes the AD at least, cheers :)

Contributor Author:
I suppose k(x, x) could be zero as well?

Member:
But then we're in trouble, because we would be dividing by 0, and you would probably have bigger problems generally.
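For context on the `Fill`/`Ones` exchange above: whenever ``k(x, x) > 0``, the normalized diagonal entry is exactly one,

```math
\widetilde{k}(x, x) = \frac{k(x, x)}{\sqrt{k(x, x)\,k(x, x)}} = 1,
```

so `Ones` would also be correct there; the concerns about negative or zero ``k(x, x)`` apply equally to both choices.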

function kernelmatrix_diag(κ::NormalizedKernel, x::AbstractVector, y::AbstractVector)
    return kernelmatrix_diag(κ.kernel, x, y) ./
           sqrt.(kernelmatrix_diag(κ.kernel, x) .* kernelmatrix_diag(κ.kernel, y))
end

function kernelmatrix!(
    K::AbstractMatrix, κ::NormalizedKernel, x::AbstractVector, y::AbstractVector
)
    kernelmatrix!(K, κ.kernel, x, y)
    K ./=
        sqrt.(kernelmatrix_diag(κ.kernel, x) .* permutedims(kernelmatrix_diag(κ.kernel, y)))
    return K
end

function kernelmatrix!(K::AbstractMatrix, κ::NormalizedKernel, x::AbstractVector)
    kernelmatrix!(K, κ.kernel, x)
    x_diag = kernelmatrix_diag(κ.kernel, x)
    K ./= sqrt.(x_diag .* permutedims(x_diag))
    return K
end

function kernelmatrix_diag!(
    K::AbstractVector, κ::NormalizedKernel, x::AbstractVector, y::AbstractVector
)
    kernelmatrix_diag!(K, κ.kernel, x, y)
    K ./= sqrt.(kernelmatrix_diag(κ.kernel, x) .* kernelmatrix_diag(κ.kernel, y))
    return K
end

function kernelmatrix_diag!(K::AbstractVector, κ::NormalizedKernel, x::AbstractVector)
    first_x = first(x)
    return fill!(K, κ(first_x, first_x))
end
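A short sketch of how the in-place methods above would be called (the base kernel and sizes are illustrative):

```julia
using KernelFunctions

kn = NormalizedKernel(2.0 * SqExponentialKernel())
xs = rand(10)

K = Matrix{Float64}(undef, 10, 10)
kernelmatrix!(K, kn, xs)        # overwrites K with the normalized kernel matrix

d = Vector{Float64}(undef, 10)
kernelmatrix_diag!(d, kn, xs)   # overwrites d; every entry is κ(x₁, x₁), i.e. ≈ 1
```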

Base.show(io::IO, κ::NormalizedKernel) = printshifted(io, κ, 0)

function printshifted(io::IO, κ::NormalizedKernel, shift::Int)
println(io, "Normalized Kernel:")
for _ in 1:(shift + 1)
print(io, "\t")
end
return printshifted(io, κ.kernel, shift + 1)
end
16 changes: 16 additions & 0 deletions test/kernels/normalizedkernel.jl
@@ -0,0 +1,16 @@
@testset "normalizedkernel" begin
    rng = MersenneTwister(123456)
    x = randn(rng)
    y = randn(rng)

    k = 4 * SqExponentialKernel()
    kn = NormalizedKernel(k)
    @test kn(x, y) == k(x, y) / sqrt(k(x, x) * k(y, y))
    @test kn(x, x) ≈ one(x) atol = 1e-5

    # Standardised tests.
    TestUtils.test_interface(kn, Float64)
    test_ADs(x -> NormalizedKernel(exp(x[1]) * SqExponentialKernel()), rand(1))

    test_params(kn, k)
end
1 change: 1 addition & 0 deletions test/runtests.jl
@@ -123,6 +123,7 @@ include("test_utils.jl")
include(joinpath("kernels", "overloads.jl"))
include(joinpath("kernels", "scaledkernel.jl"))
include(joinpath("kernels", "transformedkernel.jl"))
include(joinpath("kernels", "normalizedkernel.jl"))
end
@info "Ran tests on Kernel"
