
New attention layers #44

@soran-ghaderi

Description

A list of new attention layers to be added

You can pick any of the following frameworks and implement your layers:

  • TensorFlow
  • PyTorch
  • JAX
  • NumPy

To contribute, please:

  1. Create a new issue: copy the name of an available subtask (one that is neither closed nor already opened by someone else) and paste it into the title, followed by a reference to this task (e.g. subtask_name #source_issue_number). Alternatively, hover over a subtask and click "Convert to issue".
  2. Post a link to your new issue here in the comments.
  3. Fork the repository.
  4. Add your changes.
  5. Create a pull request and mention your issue link.

Please note that we use NumPy style for docstrings, and provide usage examples whenever possible! Also, please include unit tests written with pytest; a minimal example of both follows.
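
For illustration only, here is a hypothetical layer with a NumPy-style docstring (including an Examples section) and a pytest-style unit test. The function name and shapes are assumptions made for this sketch, not part of the repository's API:

```python
import numpy as np


def scaled_dot_product_attention(queries, keys, values):
    """Compute scaled dot-product attention.

    Parameters
    ----------
    queries : np.ndarray, shape (n_q, d_k)
        Query vectors.
    keys : np.ndarray, shape (n_k, d_k)
        Key vectors.
    values : np.ndarray, shape (n_k, d_v)
        Value vectors aligned with `keys`.

    Returns
    -------
    np.ndarray, shape (n_q, d_v)
        Attention-weighted combination of the values.

    Examples
    --------
    >>> q, k, v = np.ones((2, 4)), np.ones((3, 4)), np.ones((3, 8))
    >>> scaled_dot_product_attention(q, k, v).shape
    (2, 8)
    """
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])  # (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ values


def test_attention_output_shape():
    # Random inputs with fixed seed; the output must combine 3 values
    # of dimension 8 for each of the 2 queries.
    rng = np.random.default_rng(0)
    q = rng.normal(size=(2, 4))
    k = rng.normal(size=(3, 4))
    v = rng.normal(size=(3, 8))
    out = scaled_dot_product_attention(q, k, v)
    assert out.shape == (2, 8)
```

Run the test with pytest from the repository root.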

If you need help with the implementation or any other related issue, please reach out via the Discussions tab or the Discord community server: @soran-ghaderi or @sigma1326.

  • Strided Attention
  • Fixed Factorized Attention
  • Additive Attention (a minimal sketch follows this list)
  • RAN
  • RAM
  • STN
  • Temporal Attention
  • Channel Attention
  • Axial Attention
  • Sliding Window Attention
  • Global And Sliding Window Attention
  • Dilated Sliding Window Attention
  • Dynamic Convolution
  • Content-Based Attention
  • Global-Local Attention
  • Attention Gate
  • Class Attention
  • Location-Based Attention
  • Channel-Wise Soft Attention
  • FAVOR+
  • Disentangled Attention Mechanism
  • Location Sensitive Attention
  • LSH Attention
  • TAM
  • SRM
  • BAM
  • Set Transformer
  • Coordinate Attention
  • BigBird
  • Rendezvous
  • Adaptive Masking
  • DANet
  • Bi-Attention
  • RGA
  • SEAM
  • SPNet
  • DMA
  • GALA
  • Neighborhood Attention
  • Channel Squeeze And Spatial Excitation
  • GCT
  • Routing Attention
  • Cross-Covariance Attention
  • 3D SA
  • Sparse Sinkhorn Attention
  • Concurrent Spatial And Channel Squeeze And Excitation
  • Deformable ConvNets
  • SCA-CNN
  • Channel And Spatial Attention
  • Locally-Grouped Self-Attention
  • Class Activation-Guided Attention Mechanism
  • Factorized Dense Synthesized Attention
  • HyperHyperNetwork
  • ProCAN
  • scSE
  • MHMA
  • Branch Attention
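
To give a sense of the expected scope of each subtask, below is a minimal NumPy sketch of one item from the list, Additive Attention, using the Bahdanau-style score vᵀ tanh(W_q q + W_k k). All names, shapes, and parameters here are assumptions made for this sketch rather than an existing implementation; a real contribution would wrap the same logic in a trainable layer for the chosen framework:

```python
import numpy as np


def additive_attention(query, keys, values, w_q, w_k, v):
    """Bahdanau-style additive attention for a single query.

    Parameters
    ----------
    query : np.ndarray, shape (d_q,)
        Query vector.
    keys : np.ndarray, shape (n, d_k)
        Key vectors.
    values : np.ndarray, shape (n, d_v)
        Value vectors aligned with `keys`.
    w_q : np.ndarray, shape (d_h, d_q)
    w_k : np.ndarray, shape (d_h, d_k)
        Projections of the query and keys into a shared hidden space.
    v : np.ndarray, shape (d_h,)
        Scoring vector.

    Returns
    -------
    context : np.ndarray, shape (d_v,)
        Attention-weighted sum of `values`.
    weights : np.ndarray, shape (n,)
        Attention distribution over the keys.
    """
    # Project the query (broadcast over the n keys) and every key, then score.
    hidden = np.tanh(w_q @ query + keys @ w_k.T)  # (n, d_h)
    scores = hidden @ v                           # (n,)
    scores -= scores.max()                        # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()                      # softmax over keys
    context = weights @ values                    # (d_v,)
    return context, weights
```

The same computation ports directly to TensorFlow, PyTorch, or JAX by replacing the array ops and treating `w_q`, `w_k`, and `v` as trainable parameters.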
