
Chebyshev KAN CUDA implementation

References:

Note

There is a problem with backprop when replacing bmm with einsum (see here), probably because I created the tensors the wrong way. If anyone knows how to fix that, please help; many thanks.
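For reference, here is a small standalone sketch (not code from this repository) of the contraction in question: torch.bmm and the equivalent einsum should agree in both values and gradients, which torch.autograd.gradcheck can verify.

import torch

# Small double-precision tensors so gradcheck's numerical Jacobian is reliable.
a = torch.randn(2, 3, 4, dtype=torch.double, requires_grad=True)
b = torch.randn(2, 4, 5, dtype=torch.double, requires_grad=True)

# Forward results of bmm and the equivalent einsum contraction should match.
print(torch.allclose(torch.bmm(a, b), torch.einsum("bij,bjk->bik", a, b)))

# gradcheck compares analytic and numerical gradients for each formulation.
print(torch.autograd.gradcheck(lambda x, y: torch.bmm(x, y), (a, b)))
print(torch.autograd.gradcheck(lambda x, y: torch.einsum("bij,bjk->bik", x, y), (a, b)))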

Start

  1. Install
pip install -e .

Make sure the version of nvcc in PATH is compatible with your current PyTorch version (a minor version difference seems to be OK); a quick way to compare the two is sketched below.
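This convenience check is not part of the repo; it just prints the CUDA version PyTorch was built against next to the nvcc found in PATH.

import subprocess
import torch

# CUDA version the installed PyTorch was built against (None for CPU-only builds).
print("PyTorch built with CUDA:", torch.version.cuda)
# nvcc version picked up from PATH.
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)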

  2. Run
  • Run the test on MNIST:
python cheby_test.py
  • Or you can build your own net (a minimal smoke-test sketch follows the class below):
from fasterCuChebyKan.layer import ChebyKANLayer
import torch.nn as nn

class ChebyNet(nn.Module):
    def __init__(self):
        super(ChebyNet, self).__init__()
        self.layer1 = ChebyKANLayer(28*28, 256, 4)
        self.ln1 = nn.LayerNorm(256)
        self.layer2 = ChebyKANLayer(256, 10, 4)

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = self.layer1(x)
        x = self.ln1(x)
        x = self.layer2(x)
        return x
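A minimal smoke-test sketch (not from the repo): it assumes the extension expects CUDA tensors and uses random data in place of MNIST, just to confirm that forward and backward run.

import torch
import torch.nn as nn

model = ChebyNet().cuda()                       # assumption: the custom op requires CUDA tensors
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.rand(64, 1, 28, 28, device="cuda")    # random stand-in for an MNIST batch
y = torch.randint(0, 10, (64,), device="cuda")  # random labels

logits = model(x)
loss = criterion(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("loss:", loss.item())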
