@hypnopump

This PR implements several high-level features:

  • An interface to call torch.nn.Linear() the kernel way (see the sketch after this list).
  • A reproducible conda environment for pykeops on Linux with GPU support.
  • Naive implementations of several activation functions.
  • A simple multi-head (MH) attention layer (probably better moved to the examples?).
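
As a rough sketch of the pattern behind the first and last items (not the exact interface added here), the snippet below builds a single-head attention step on pykeops.torch.LazyTensor, assuming the `sumsoftmaxweight` reduction available in recent pykeops versions: the projections are ordinary torch.nn.Linear calls, while the N × M score matrix stays symbolic and is reduced by KeOps.

```python
import torch
from pykeops.torch import LazyTensor

# Illustrative sketch only -- names and shapes are arbitrary.
N, M, D = 10_000, 20_000, 64           # number of queries, keys/values, feature dim
x = torch.randn(N, D)                  # query inputs
y = torch.randn(M, D)                  # key/value inputs

# Ordinary dense projections; only the pairwise interaction is kept symbolic.
to_q, to_k, to_v = (torch.nn.Linear(D, D) for _ in range(3))

q_i = LazyTensor(to_q(x)[:, None, :])  # (N, 1, D) symbolic queries
k_j = LazyTensor(to_k(y)[None, :, :])  # (1, M, D) symbolic keys
v_j = LazyTensor(to_v(y)[None, :, :])  # (1, M, D) symbolic values

# Scaled dot-product scores: a lazy (N, M) matrix that is never materialized.
s_ij = (q_i * k_j).sum(-1) / D ** 0.5

# A "naive" activation the kernel way stays symbolic too, e.g.
#   g_ij = s_ij.relu()

# Softmax-weighted sum over j -> dense (N, D) output.
out = s_ij.sumsoftmaxweight(v_j, dim=1)
```

The full score matrix never touches memory, which is what makes these layers worth expressing with KeOps in the first place.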

Projected roadmap:

  • Provide an interface for simple MLPs.
  • Optimized activation functions (to be done once the engine is rewritten).

Proposals:

  • Element-wise maximum and minimum operators over multiple tensors, e.g. max([0, 1], [1, 0]) = [1, 1] (see the sketch below).
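
To make the proposed semantics concrete, here is a small sketch: the dense reference in plain torch, plus a way to emulate a binary element-wise max/min on LazyTensors with operations KeOps already exposes (ReLU), using the identities max(a, b) = b + relu(a - b) and min(a, b) = a - relu(a - b). The `lazy_max` / `lazy_min` helpers are illustrative only, not part of this PR.

```python
import torch
from pykeops.torch import LazyTensor

# Dense reference for the proposed semantics: max([0, 1], [1, 0]) = [1, 1].
a = torch.tensor([0.0, 1.0])
b = torch.tensor([1.0, 0.0])
print(torch.stack([a, b]).amax(dim=0))     # tensor([1., 1.])

# Emulation with existing KeOps ops, pending a native operator.
def lazy_max(a_ij, b_ij):
    return b_ij + (a_ij - b_ij).relu()     # max(a, b) = b + relu(a - b)

def lazy_min(a_ij, b_ij):
    return a_ij - (a_ij - b_ij).relu()     # min(a, b) = a - relu(a - b)

x_i = LazyTensor(torch.randn(100, 1, 3))   # (N, 1, D) i-variable
y_j = LazyTensor(torch.randn(1, 200, 3))   # (1, M, D) j-variable
m_ij = lazy_max(x_i, y_j)                  # lazy element-wise max of dim 3
out = m_ij.sum(dim=1)                      # reduce over j -> dense (100, 3) tensor
```

A native N-ary Max/Min would simply fold this pairwise form over the argument list, fused into a single formula.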
