A list of new attention layers to be added
You can pick any of the supported frameworks (TensorFlow, PyTorch, JAX, or NumPy) and implement your layers in it.
To contribute, please:
- Create a new issue: copy the name of an available subtask (one that is not closed and not already opened by someone else) and paste it into the title, followed by a reference to this task (e.g. subtask_name #source_issue_number). Alternatively, hover over a subtask below and click "Convert to issue". Then paste a link to your new issue here in the comments.
- Fork the repository
- Add your changes
- Create a pull request and mention your issue link
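For reference, a typical fork-and-PR flow looks roughly like the sketch below; the repository URL, branch name, and commit message are placeholders, not project conventions:

```bash
# Fork the repository on GitHub first, then:
git clone https://github.com/<your-username>/<repo-name>.git
cd <repo-name>
git checkout -b add-strided-attention        # hypothetical branch name

# ... implement your layer, docstrings, and tests ...

git add .
git commit -m "Add Strided Attention (#<source_issue_number>)"
git push origin add-strided-attention
# Finally, open a pull request on GitHub and mention your issue link in it.
```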
Please note that we use NumPy-style docstrings and ask that you include usage examples wherever possible.
Please also provide unit tests written with pytest.
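As a rough illustration of both conventions, here is a toy function with a NumPy-style docstring and a matching pytest test; the function itself (a plain softmax) is made up for the example and is not part of the library:

```python
import numpy as np

def softmax(x, axis=-1):
    """Compute a numerically stable softmax.

    Parameters
    ----------
    x : np.ndarray
        Input array of arbitrary shape.
    axis : int, optional
        Axis along which to normalize, by default -1.

    Returns
    -------
    np.ndarray
        Array of the same shape as ``x`` whose values along ``axis``
        are non-negative and sum to 1.

    Examples
    --------
    >>> probs = softmax(np.array([1.0, 2.0, 3.0]))
    >>> bool(np.isclose(probs.sum(), 1.0))
    True
    """
    shifted = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

# A minimal pytest-style unit test for the function above:
def test_softmax_sums_to_one():
    out = softmax(np.array([[0.5, 1.5, -2.0]]))
    np.testing.assert_allclose(out.sum(axis=-1), 1.0)
```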
If you need help with the implementation or any other related issues, please reach out via Discussions or the Discord community server: @soran-ghaderi or @sigma1326.
- Strided Attention
- Fixed Factorized Attention
- Additive Attention
- RAN
- RAM
- STN
- Temporal Attention
- Channel Attention
- Axial Attention
- Sliding Window Attention
- Global And Sliding Window Attention
- Dilated Sliding Window Attention
- Dynamic Convolution
- Content-Based Attention
- Global-Local Attention
- Attention Gate
- Class Attention
- Location-Based Attention
- Channel-Wise Soft Attention
- FAVOR+
- Disentangled Attention Mechanism
- Location Sensitive Attention
- LSH Attention
- TAM
- SRM
- BAM
- Set Transformer
- Coordinate Attention
- BigBird
- Rendezvous
- Adaptive Masking
- DANet
- Bi-Attention
- RGA
- SEAM
- SPNet
- DMA
- GALA
- Neighborhood Attention
- Channel Squeeze And Spatial Excitation
- GCT
- Routing Attention
- Cross-Covariance Attention
- 3D SA
- Sparse Sinkhorn Attention
- Concurrent Spatial And Channel Squeeze And Excitation
- Deformable ConvNets
- SCA-CNN
- Channel And Spatial Attention
- Locally-Grouped Self-Attention
- Class Activation-Guided Attention Mechanism
- Factorized Dense Synthesized Attention
- HyperHyperNetwork
- ProCAN
- scSE
- MHMA
- Branch Attention
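To give a sense of what a contribution might look like, here is a minimal NumPy sketch of one of the listed subtasks, Additive (Bahdanau-style) Attention; the shapes, parameter names, and function signature are illustrative assumptions, not the project's API:

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    """Additive (Bahdanau-style) attention over a sequence of keys.

    Each key k_i is scored against the query as ``v^T tanh(W_q q + W_k k_i)``,
    and the scores are softmax-normalized into attention weights.

    Parameters
    ----------
    query : np.ndarray
        Query vector of shape ``(d_q,)``.
    keys : np.ndarray
        Key matrix of shape ``(seq_len, d_k)``; also used as the values.
    W_q : np.ndarray
        Query projection of shape ``(d_att, d_q)``.
    W_k : np.ndarray
        Key projection of shape ``(d_att, d_k)``.
    v : np.ndarray
        Scoring vector of shape ``(d_att,)``.

    Returns
    -------
    tuple of np.ndarray
        Context vector of shape ``(d_k,)`` and weights of shape ``(seq_len,)``.
    """
    features = np.tanh(W_q @ query + keys @ W_k.T)  # (seq_len, d_att)
    scores = features @ v                           # (seq_len,)
    scores -= scores.max()                          # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ keys                        # weighted sum of the keys
    return context, weights

# Hypothetical usage with random inputs:
rng = np.random.default_rng(0)
q, K = rng.normal(size=4), rng.normal(size=(5, 6))
Wq, Wk, v = rng.normal(size=(8, 4)), rng.normal(size=(8, 6)), rng.normal(size=8)
ctx, w = additive_attention(q, K, Wq, Wk, v)
assert ctx.shape == (6,) and np.isclose(w.sum(), 1.0)
```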