Rename Hypercube to HypercubeGraph for clarity
vabor112 committed Aug 13, 2024
1 parent 6720588 commit 2099a96
Showing 11 changed files with 52 additions and 51 deletions.
1 change: 0 additions & 1 deletion docs/examples/Hypercube.nblink

This file was deleted.

1 change: 1 addition & 0 deletions docs/examples/HypercubeGraph.nblink
@@ -0,0 +1 @@
{"path": "../../notebooks/HypercubeGraph.ipynb"}
6 changes: 3 additions & 3 deletions docs/theory/hypercube.rst → docs/theory/hypercube_graph.rst
@@ -3,15 +3,15 @@
################################

.. warning::
You can get by fine without reading this page for almost all use cases, just use the standard :class:`~.kernels.MaternGeometricKernel`, following the respective :doc:`example notebook </examples/Hypercube>`.
You can get by fine without reading this page for almost all use cases; just use the standard :class:`~.kernels.MaternGeometricKernel`, following the respective :doc:`example notebook </examples/HypercubeGraph>`.

This is optional material meant to explain the basic theory and based mainly on :cite:t:`borovitskiy2023`.

==========================
Motivation
==========================

The :class:`~.spaces.Hypercube` space $C^d$ can be used to model $d$-dimensional *binary vector* inputs.
The :class:`~.spaces.HypercubeGraph` space $C^d$ can be used to model $d$-dimensional *binary vector* inputs.

There are many settings where inputs are binary vectors or can be represented as such. For instance, upon flattening, binary vectors represent adjacency matrices of *unweighted labeled graphs* [#]_.
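A hedged sketch of this encoding, using the renamed space from this commit, the numpy backend, and the library's $[N, d]$ boolean point convention:

```python
# Hedged sketch: encode a small unweighted labeled graph as a point of
# HypercubeGraph by flattening its adjacency matrix. Assumes the numpy
# backend and the [N, d] boolean point convention.
import numpy as np
from geometric_kernels.spaces import HypercubeGraph

adjacency = np.array(
    [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]],
    dtype=bool,
)  # adjacency matrix of a labeled path graph on 3 nodes

x = adjacency.reshape(1, -1)         # a single binary vector of length 9
space = HypercubeGraph(x.shape[-1])  # C^9, i.e. d = 9
```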

@@ -97,7 +97,7 @@ where $m$ is the Hamming distance between $x$ and $x'$, and $L \leq d + 1$ is th

**Notes:**

#. We define the dimension of the :class:`~.spaces.Hypercube` space $C^d$ to be $d$, in contrast to the graphs represented by the :class:`~.spaces.Graph` space, whose dimension is defined to be $0$.
#. We define the dimension of the :class:`~.spaces.HypercubeGraph` space $C^d$ to be $d$, in contrast to the graphs represented by the :class:`~.spaces.Graph` space, whose dimension is defined to be $0$.

Because of this, much like in the Euclidean or the manifold case, the $1/2, 3/2, 5/2$ *are* in fact reasonable values for the smoothness parameter $\nu$.
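For instance, a minimal sketch of evaluating a Matérn-$5/2$ kernel on this space; the `init_params` / `K(params, X, X2)` interface is assumed from the library's usual kernel API rather than anything shown in this diff:

```python
# Hedged sketch: a Matérn kernel on C^6 with nu = 5/2. The init_params /
# K(params, X, X2) interface is assumed from the library's usual kernel API.
import numpy as np
from geometric_kernels.kernels import MaternGeometricKernel
from geometric_kernels.spaces import HypercubeGraph

space = HypercubeGraph(6)
kernel = MaternGeometricKernel(space)

params = kernel.init_params()
params["nu"] = np.array([5 / 2])          # 1/2, 3/2, 5/2 are all sensible here
params["lengthscale"] = np.array([1.0])

key = np.random.RandomState(0)
key, xs = space.random(key, 4)            # [4, 6] boolean array
K = kernel.K(params, xs, xs)              # [4, 4] kernel matrix
```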

2 changes: 1 addition & 1 deletion docs/theory/index.rst
@@ -13,4 +13,4 @@
Kernels on product spaces <product_spaces>
Product kernels <product_kernels>
Feature maps and sampling <feature_maps>
Hypercube graph space <hypercube>
Hypercube graph space <hypercube_graph>
4 changes: 2 additions & 2 deletions geometric_kernels/kernels/matern_kernel.py
@@ -24,7 +24,7 @@
DiscreteSpectrumSpace,
Graph,
Hyperbolic,
Hypercube,
HypercubeGraph,
Hypersphere,
Mesh,
NoncompactSymmetricSpace,
@@ -200,7 +200,7 @@ def default_num(space: DiscreteSpectrumSpace) -> int:
return min(
MaternGeometricKernel._DEFAULT_NUM_EIGENFUNCTIONS, space.num_vertices
)
elif isinstance(space, Hypercube):
elif isinstance(space, HypercubeGraph):
return min(MaternGeometricKernel._DEFAULT_NUM_LEVELS, space.dim + 1)
else:
return MaternGeometricKernel._DEFAULT_NUM_LEVELS
2 changes: 1 addition & 1 deletion geometric_kernels/spaces/__init__.py
@@ -11,7 +11,7 @@
from geometric_kernels.spaces.circle import Circle
from geometric_kernels.spaces.graph import Graph
from geometric_kernels.spaces.hyperbolic import Hyperbolic
from geometric_kernels.spaces.hypercube import Hypercube
from geometric_kernels.spaces.hypercube_graph import HypercubeGraph
from geometric_kernels.spaces.hypersphere import Hypersphere
from geometric_kernels.spaces.lie_groups import CompactMatrixLieGroup
from geometric_kernels.spaces.mesh import Mesh
geometric_kernels/spaces/hypercube.py → geometric_kernels/spaces/hypercube_graph.py
@@ -1,5 +1,5 @@
"""
This module provides the :class:`Hypercube` space and the respective
This module provides the :class:`HypercubeGraph` space and the respective
:class:`~.eigenfunctions.Eigenfunctions` subclass :class:`WalshFunctions`.
"""

@@ -25,8 +25,9 @@

class WalshFunctions(EigenfunctionsWithAdditionTheorem):
r"""
Eigenfunctions of graph Laplacian on the hypercube $C^d = \{0, 1\}^d$ are
Walsh functions $w_T: C^d \to \{-1, 1\}$ given by
Eigenfunctions of the graph Laplacian on the hypercube graph $C^d$, whose nodes
are indexed by binary vectors in $\{0, 1\}^d$, are the Walsh
functions $w_T: C^d \to \{-1, 1\}$ given by
.. math:: w_T(x_0, .., x_{d-1}) = (-1)^{\sum_{i \in T} x_i},
@@ -37,7 +38,7 @@ class WalshFunctions(EigenfunctionsWithAdditionTheorem):
certain discrete orthogonal polynomials called Kravchuk polynomials.
:param dim:
Dimension $d$ of the hypercube.
Dimension $d$ of the hypercube graph.
:param num_levels:
Specifies the number of levels of the Walsh functions.
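A plain-NumPy illustration of the definition above, independent of the library's own implementation: evaluate $w_T$ for every subset $T$ of size $j$ and check that there are $\binom{d}{j}$ of them, matching the per-level counts returned further below.

```python
# Plain-numpy illustration of w_T(x) = (-1)^{sum_{i in T} x_i}; the library's
# own WalshFunctions implementation is not reproduced here.
from itertools import combinations
from math import comb

import numpy as np

def walsh(T, x):
    return (-1) ** int(np.sum(x[list(T)]))

d = 4
x = np.array([1, 0, 1, 1])
for j in range(d + 1):
    subsets = list(combinations(range(d), j))
    values = [walsh(T, x) for T in subsets]
    assert len(values) == comb(d, j)  # comb(d, j) Walsh functions at level j
```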
@@ -162,7 +163,7 @@ def num_eigenfunctions_per_level(self) -> List[int]:
return [comb(self.dim, level) for level in range(self.num_levels)]


class Hypercube(DiscreteSpectrumSpace):
class HypercubeGraph(DiscreteSpectrumSpace):
r"""
The GeometricKernels space representing the d-dimensional hypercube graph
$C^d = \{0, 1\}^d$, the combinatorial space of binary vectors of length $d$.
@@ -173,15 +174,15 @@ class Hypercube(DiscreteSpectrumSpace):
.. note::
A tutorial on how to use this space is available in the
:doc:`Hypersphere.ipynb </examples/Hypercube>` notebook.
:doc:`HypercubeGraph.ipynb </examples/HypercubeGraph>` notebook.
.. note::
Since the degree matrix is a constant multiple of the identity, all
types of the graph Laplacian coincide on the hypercube up to a constant,
we choose the normalized Laplacian for numerical stability.
types of the graph Laplacian coincide on the hypercube graph up to a
constant; we choose the normalized Laplacian for numerical stability.
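Concretely, with $A$ the adjacency matrix and $D$ the degree matrix of $C^d$, every vertex has degree $d$, so $D = d I$ and

.. math:: I - D^{-1/2} A D^{-1/2} = I - \frac{1}{d} A = \frac{1}{d} (D - A),

i.e. the normalized and unnormalized Laplacians share eigenvectors and their eigenvalues differ only by the factor $d$.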
:param dim:
Dimension $d$ of the hypercube $C^d = \{0, 1\}^d$, a positive integer.
Dimension $d$ of the hypercube graph $C^d$, a positive integer.
.. admonition:: Citation
@@ -202,7 +203,7 @@ def dimension(self) -> int:
.. note::
Although this is a graph, and graphs are generally treated as
0-dimensional throughout GeometricKernels, we make an exception for
the hypercube. This is because it helps maintain good behavior of
HypercubeGraph. This is because it helps maintain good behavior of
Matérn kernels with the usual values of the smoothness parameter
nu, i.e. nu = 1/2, nu = 3/2, nu = 5/2.
"""
@@ -240,7 +241,7 @@ def get_repeated_eigenvalues(self, num: int) -> B.Numeric:

def random(self, key: B.RandomState, number: int) -> B.Numeric:
r"""
Sample uniformly random points on the hypercube $C^d = \{0, 1\}^d$.
Sample uniformly random points on the hypercube graph $C^d$.
Always returns [N, D] boolean array of the `key`'s backend.
8 changes: 4 additions & 4 deletions geometric_kernels/utils/special_functions.py
@@ -63,7 +63,7 @@ def kravchuk_normalized(
.. math:: G_{d, j, m} = \sum_{T \subseteq \{0, .., d-1\}, |T| = j} w_T(x).
Here $w_T$ are the Walsh functions on the hypercube $C^d = \{0, 1\}^d$ and
Here $w_T$ are the Walsh functions on the hypercube graph $C^d$ and
$x \in C^d$ is an arbitrary binary vector with $m$ ones (the right-hand side
does not depend on the choice of a particular vector of the kind).
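A hedged brute-force sketch of this identity in plain NumPy; it evaluates the raw sum only, since the exact normalization applied by `kravchuk_normalized` is not visible in this hunk:

```python
# Brute-force G_{d, j, m}: sum of Walsh functions w_T(x) over all |T| = j,
# evaluated at any binary vector x with m ones. The normalization used by
# kravchuk_normalized itself is not reproduced here.
from itertools import combinations

import numpy as np

def G(d, j, m):
    x = np.array([1] * m + [0] * (d - m))   # any vector with m ones works
    return sum(
        (-1) ** int(np.sum(x[list(T)]))     # w_T(x)
        for T in combinations(range(d), j)
    )
```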
@@ -115,15 +115,15 @@ def kravchuk_normalized(
return (rhs_1 + rhs_2) / (d - j + 1)


def hypercube_heat_kernel(
def hypercube_graph_heat_kernel(
lengthscale: B.Numeric,
X: B.Numeric,
X2: Optional[B.Numeric] = None,
normalized_laplacian: bool = True,
):
"""
Analytic formula for the heat kernel on the hypercube, see Equation (14) in
:cite:t:`borovitskiy2023`.
Analytic formula for the heat kernel on the hypercube graph, see
Equation (14) in :cite:t:`borovitskiy2023`.
:param lengthscale:
The length scale of the kernel, an array of shape [1].
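For orientation, a hedged NumPy sketch of the closed form that the tests further below compare this routine against in the `normalized_laplacian=False` case: an RBF kernel with $\gamma = -\log \tanh(\ell^2/2)$, which on binary inputs reduces to $\tanh(\ell^2/2)$ raised to the Hamming distance.

```python
# Hedged sketch of the unnormalized-Laplacian closed form checked in the tests
# below: exp(-gamma * ||x - x'||^2) with gamma = -log(tanh(l^2 / 2)), which on
# binary vectors is tanh(l^2 / 2) ** hamming(x, x'). The normalized-Laplacian
# variant is not reproduced here.
import numpy as np

def heat_kernel_sketch(lengthscale, X, X2):
    t = np.tanh(lengthscale ** 2 / 2.0)
    hamming = np.sum(X[:, None, :] != X2[None, :, :], axis=-1)  # [N, N2]
    return t ** hamming

X = np.array([[0, 0, 1], [1, 0, 1]], dtype=bool)
K = heat_kernel_sketch(1.0, X, X)  # K[i, i] == 1 in this convention
```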
20 changes: 10 additions & 10 deletions notebooks/Hypercube.ipynb → notebooks/HypercubeGraph.ipynb
@@ -20,12 +20,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Matérn and Heat Kernels on the Hypercube Graph\n",
"# Matérn and Heat Kernels on HypercubeGraph\n",
"This notebook shows how define and evaluate kernels on the hypercube graph $C^d = \\{0, 1\\}^d$ for modeling data encoded as binary vectors with kernels that respect the geometry of the Hamming distance.\n",
"\n",
"At the very end of the notebook we also show how to construct *approximate finite-dimensional feature maps* for the kernels on the hypercube graph and how to use these to efficiently sample the Gaussian processes $\\mathrm{GP}(0, k)$.\n",
"\n",
"**Note:** the points on the hypercube $C^d$ are boolean vectors of size $d$ (`array`s of the suitable backend).\n",
"**Note:** the points on the hypercube graph $C^d$ are boolean vectors of size $d$ (`array`s of the suitable backend).\n",
"\n",
"We use the **numpy** backend here."
]
@@ -88,7 +88,7 @@
"# import geometric_kernels.jax\n",
"\n",
"# Import a space and an appropriate kernel.\n",
"from geometric_kernels.spaces import Hypercube\n",
"from geometric_kernels.spaces import HypercubeGraph\n",
"from geometric_kernels.kernels import MaternGeometricKernel\n",
"\n",
"# We use networkx to visualize graphs\n",
@@ -109,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"First we create a GeometricKernels `space` that corresponds to the 6-dimensional hypercube $C^6 = \\{0, 1\\}^6$."
"First we create a GeometricKernels `space` that corresponds to the 6-dimensional hypercube graph $C^6 = \\{0, 1\\}^6$."
]
},
{
@@ -120,7 +120,7 @@
},
"outputs": [],
"source": [
"hypercube = Hypercube(6)"
"hypercube_graph = HypercubeGraph(6)"
]
},
{
@@ -136,12 +136,12 @@
"source": [
"First, we create a generic Matérn kernel.\n",
"\n",
"To initialize `MaternGeometricKernel` you just need to provide a `Space` object, in our case this is the `hypercube` we have just created above.\n",
"To initialize `MaternGeometricKernel` you just need to provide a `Space` object, in our case this is the `hypercube_graph` we have just created above.\n",
"\n",
"There is also an optional second parameter `num` which determines the order of approximation of the kernel (*number of levels*).\n",
"There is a sensible default value for each of the spaces in the library, so change it only if you know what you are doing.\n",
"\n",
"A brief account on theory behind the kernels on the Hypecube space can be found on this [documentation page](https://geometric-kernels.github.io/GeometricKernels/theory/hypercube.html)."
"A brief account on theory behind the kernels on the Hypecube space can be found on this [documentation page](https://geometric-kernels.github.io/GeometricKernels/theory/hypercube_graph.html)."
]
},
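A hedged sketch of overriding the level count explicitly; passing `num` as a keyword is assumed from the description above, and for this space at most `dim + 1` levels exist (see the `default_num` change earlier in this commit):

```python
# Hedged sketch: request the maximum number of levels explicitly. Passing num
# as a keyword argument is assumed, not shown in this notebook.
kernel_full = MaternGeometricKernel(hypercube_graph, num=hypercube_graph.dim + 1)
```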
{
@@ -150,7 +150,7 @@
"metadata": {},
"outputs": [],
"source": [
"kernel = MaternGeometricKernel(hypercube)"
"kernel = MaternGeometricKernel(hypercube_graph)"
]
},
{
Expand Down Expand Up @@ -248,7 +248,7 @@
"source": [
"key = np.random.RandomState(1234)\n",
"\n",
"key, xs = hypercube.random(key, 3)\n",
"key, xs = hypercube_graph.random(key, 3)\n",
"\n",
"print(xs, xs.dtype)"
]
@@ -706,7 +706,7 @@
"source": [
"## Citation\n",
"\n",
"If you are using the Hypercube space and GeometricKernels, please consider citing\n",
"If you are using the HypercubeGraph space and GeometricKernels, please consider citing\n",
"\n",
"```\n",
"@article{mostowsky2024,\n",
@@ -5,8 +5,8 @@
from plum import Tuple

from geometric_kernels.kernels import MaternGeometricKernel
from geometric_kernels.spaces import Hypercube
from geometric_kernels.utils.special_functions import hypercube_heat_kernel
from geometric_kernels.spaces import HypercubeGraph
from geometric_kernels.utils.special_functions import hypercube_graph_heat_kernel
from geometric_kernels.utils.utils import (
binary_vectors_and_subsets,
chain,
@@ -18,13 +18,13 @@
def inputs(request) -> Tuple[B.Numeric]:
"""
Returns a tuple (space, eigenfunctions, X, X2) where:
- space is a Hypercube object with dimension equal to request.param,
- space is a HypercubeGraph object with dimension equal to request.param,
- eigenfunctions is the respective Eigenfunctions object with at most 5 levels,
- X is a random sample of random size from the space,
- X2 is another random sample of random size from the space.
"""
d = request.param
space = Hypercube(d)
space = HypercubeGraph(d)
eigenfunctions = space.get_eigenfunctions(min(space.dim + 1, 5))

key = np.random.RandomState()
@@ -164,7 +164,7 @@ def test_weighted_outerproduct_against_phi_product(inputs, backend):
weights = np.random.rand(num_levels, 1)
result = B.einsum("id,...nki->...nk", weights, sum_phi_phi_for_level)

# Check that `weighted_outerproduct`, which for the hypercube has a
# Check that `weighted_outerproduct`, which for HypercubeGraph has a
# dedicated implementation, returns the same result as the usual way of
# computing the `weighted_outerproduct` (based on the `phi_product` method).
check_function_with_backend(
@@ -182,7 +182,7 @@ def test_weighted_outerproduct_diag_against_phi_product(inputs, backend):
weights = np.random.rand(num_levels, 1)
result = B.einsum("id,ni->n", weights, phi_product_diag) # [N,]

# Check that `weighted_outerproduct_diag`, which for the hypercube has a
# Check that `weighted_outerproduct_diag`, which for HypercubeGraph has a
# dedicated implementation, returns the same result as the usual way of
# computing the `weighted_outerproduct_diag` (based on the
# `phi_product_diag` method).
@@ -196,13 +196,13 @@ def test_against_analytic_heat_kernel(inputs, lengthscale, backend):
def test_against_analytic_heat_kernel(inputs, lengthscale, backend):
space, _, X, X2 = inputs
lengthscale = np.array([lengthscale])
result = hypercube_heat_kernel(lengthscale, X, X2)
result = hypercube_graph_heat_kernel(lengthscale, X, X2)

kernel = MaternGeometricKernel(space)

# Check that MaternGeometricKernel on the hypercube with nu=infinity
# Check that MaternGeometricKernel on HypercubeGraph with nu=infinity
# coincides with the closed form expression for the heat kernel on the
# hypercube.
# hypercube graph.
check_function_with_backend(
backend,
result,
18 changes: 9 additions & 9 deletions tests/utils/test_special_functions.py
@@ -5,9 +5,9 @@
import pytest
from sklearn.metrics.pairwise import rbf_kernel

from geometric_kernels.spaces import Hypercube
from geometric_kernels.spaces import HypercubeGraph
from geometric_kernels.utils.special_functions import (
hypercube_heat_kernel,
hypercube_graph_heat_kernel,
kravchuk_normalized,
walsh_function,
)
@@ -117,8 +117,8 @@ def test_kravchuk_precomputed(all_xs_and_combs, backend):
@pytest.mark.parametrize("d", [1, 5, 10])
@pytest.mark.parametrize("lengthscale", [1.0, 5.0, 10.0])
@pytest.mark.parametrize("backend", ["numpy", "tensorflow", "torch", "jax"])
def test_hypercube_heat_kernel(d, lengthscale, backend):
space = Hypercube(d)
def test_hypercube_graph_heat_kernel(d, lengthscale, backend):
space = HypercubeGraph(d)

key = np.random.RandomState()
N, N2 = key.randint(low=1, high=min(2**d, 10) + 1, size=2)
@@ -128,12 +128,12 @@ def test_hypercube_heat_kernel(d, lengthscale, backend):
gamma = -log(tanh(lengthscale**2 / 2))
result = rbf_kernel(X, X2, gamma=gamma)

# Checks that the heat kernel on the hypercube coincides with the RBF kernel
# Checks that the heat kernel on the hypercube graph coincides with the RBF kernel
# restricted onto binary vectors, with appropriately redefined length scale.
check_function_with_backend(
backend,
result,
lambda lengthscale, X, X2: hypercube_heat_kernel(
lambda lengthscale, X, X2: hypercube_graph_heat_kernel(
lengthscale, X, X2, normalized_laplacian=False
),
np.array([lengthscale]),
@@ -148,10 +148,10 @@ def test_hypercube_heat_kernel(d, lengthscale, backend):
X_second = X[0:1, 3:]
X2_second = X2[0:1, 3:]

K_first = hypercube_heat_kernel(
K_first = hypercube_graph_heat_kernel(
np.array([lengthscale]), X_first, X2_first, normalized_laplacian=False
)
K_second = hypercube_heat_kernel(
K_second = hypercube_graph_heat_kernel(
np.array([lengthscale]), X_second, X2_second, normalized_laplacian=False
)

@@ -162,7 +162,7 @@ def test_hypercube_heat_kernel(d, lengthscale, backend):
check_function_with_backend(
backend,
result,
lambda lengthscale, X, X2: hypercube_heat_kernel(
lambda lengthscale, X, X2: hypercube_graph_heat_kernel(
lengthscale, X, X2, normalized_laplacian=False
),
np.array([lengthscale]),
