
Commit cfeeef0

Merge pull request #208 from flash-algo/chore/sync-after-move
Chore/sync after move
2 parents: 56a1cd1 + 5aa140b

7 files changed (+22, -22 lines)

CITATION.cff

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ cff-version: "1.2.0"
 date-released: 2025-06
 message: "If you use this software, please cite it using these metadata."
 title: "Flash Sparse Attention: Trainable Dynamic Mask Sparse Attention"
-url: "https://github.com/SmallDoges/flash-sparse-attention"
+url: "https://github.com/flash-algo/flash-sparse-attention"
 authors:
 - family-names: Shi
   given-names: Jingze

CONTRIBUTING.md

Lines changed: 6 additions & 6 deletions
@@ -4,7 +4,7 @@ Everyone is welcome to contribute, and we value everybody's contribution. Code c

 It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.

-However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/SmallDoges/flash-sparse-attention/blob/main/CODE_OF_CONDUCT.md).
+However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/flash-algo/flash-sparse-attention/blob/main/CODE_OF_CONDUCT.md).

 ## Ways to contribute

@@ -16,7 +16,7 @@ There are several ways you can contribute to Flash-DMA:
 * Contribute to the examples, benchmarks, or documentation.
 * Improve CUDA kernel performance.

-If you don't know where to start, there is a special [Good First Issue](https://github.com/SmallDoges/flash-sparse-attention/contribute) listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source.
+If you don't know where to start, there is a special [Good First Issue](https://github.com/flash-algo/flash-sparse-attention/contribute) listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source.

 > All contributions are equally valuable to the community. 🥰

@@ -81,14 +81,14 @@ You will need basic `git` proficiency to contribute to Flash-DMA. You'll need **

 ### Development Setup

-1. Fork the [repository](https://github.com/SmallDoges/flash-sparse-attention) by clicking on the **Fork** button.
+1. Fork the [repository](https://github.com/flash-algo/flash-sparse-attention) by clicking on the **Fork** button.

 2. Clone your fork to your local disk, and add the base repository as a remote:

 ```bash
 git clone https://github.com/<your Github handle>/flash-sparse-attention.git
 cd flash-sparse-attention
-git remote add upstream https://github.com/SmallDoges/flash-sparse-attention.git
+git remote add upstream https://github.com/flash-algo/flash-sparse-attention.git
 ```

 3. Create a new branch to hold your development changes:

@@ -157,7 +157,7 @@ You will need basic `git` proficiency to contribute to Flash-DMA. You'll need **

 ### Tests

-An extensive test suite is included to test the library behavior and performance. Tests can be found in the [tests](https://github.com/SmallDoges/flash-sparse-attention/tree/main/tests) folder and benchmarks in the [benchmarks](https://github.com/SmallDoges/flash-sparse-attention/tree/main/benchmarks) folder.
+An extensive test suite is included to test the library behavior and performance. Tests can be found in the [tests](https://github.com/flash-algo/flash-sparse-attention/tree/main/tests) folder and benchmarks in the [benchmarks](https://github.com/flash-algo/flash-sparse-attention/tree/main/benchmarks) folder.

 We use `pytest` for testing. From the root of the repository, run:

@@ -200,6 +200,6 @@ If you discover a security vulnerability, please send an e-mail to the maintaine

 ## Questions?

-If you have questions about contributing, feel free to ask in the [GitHub Discussions](https://github.com/SmallDoges/flash-sparse-attention/discussions) or open an issue.
+If you have questions about contributing, feel free to ask in the [GitHub Discussions](https://github.com/flash-algo/flash-sparse-attention/discussions) or open an issue.

 Thank you for contributing to Flash Sparse Attention! 🚀

README.md

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
 <!-- <div align="center">
-  <img src="./assets/logo.png" alt="SmallDoges" width="100%">
+  <img src="./assets/logo.png" alt="flash-algo" width="100%">
 </div> -->

 <div align="center">

@@ -67,7 +67,7 @@ pip install flash-sparse-attn --no-build-isolation
 Alternatively, you can compile and install from source:

 ```bash
-git clone https://github.com/SmallDoges/flash-sparse-attn.git
+git clone https://github.com/flash-algo/flash-sparse-attn.git
 cd flash-sparse-attn
 pip install . --no-build-isolation
 ```

@@ -293,8 +293,8 @@ We welcome contributions from the community! FSA is an open-source project and w

 ### How to Contribute

-- **Report bugs**: Found a bug? Please [open an issue](https://github.com/SmallDoges/flash_sparse_attn/issues/new/choose)
-- **Request features**: Have an idea for improvement? [Let us know](https://github.com/SmallDoges/flash_sparse_attn/issues/new/choose)
+- **Report bugs**: Found a bug? Please [open an issue](https://github.com/flash-algo/flash_sparse_attn/issues/new/choose)
+- **Request features**: Have an idea for improvement? [Let us know](https://github.com/flash-algo/flash_sparse_attn/issues/new/choose)
 - **Submit code**: Ready to contribute code? Check our [Contributing Guide](CONTRIBUTING.md)
 - **Improve docs**: Help us make the documentation better


README_zh.md

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
 <!-- <div align="center">
-  <img src="./assets/logo.png" alt="SmallDoges" width="100%">
+  <img src="./assets/logo.png" alt="flash-algo" width="100%">
 </div> -->

 <div align="center">

@@ -67,7 +67,7 @@ pip install flash-sparse-attn --no-build-isolation
 Alternatively, you can compile and install from source:

 ```bash
-git clone https://github.com/SmallDoges/flash-sparse-attn.git
+git clone https://github.com/flash-algo/flash-sparse-attn.git
 cd flash-sparse-attn
 pip install . --no-build-isolation
 ```

@@ -292,8 +292,8 @@ python benchmarks/grad_equivalence.py

 ### How to Contribute

-- **Report bugs**: Found a bug? Please [open an issue](https://github.com/SmallDoges/flash_sparse_attn/issues/new/choose)
-- **Request features**: Have an idea for improvement? [Let us know](https://github.com/SmallDoges/flash_sparse_attn/issues/new/choose)
+- **Report bugs**: Found a bug? Please [open an issue](https://github.com/flash-algo/flash_sparse_attn/issues/new/choose)
+- **Request features**: Have an idea for improvement? [Let us know](https://github.com/flash-algo/flash_sparse_attn/issues/new/choose)
 - **Submit code**: Ready to contribute code? Check our [Contributing Guide](CONTRIBUTING.md)
 - **Improve docs**: Help us make the documentation better


SECURITY.md

Lines changed: 3 additions & 3 deletions
@@ -50,7 +50,7 @@ If you discover a security vulnerability, please report it responsibly:
 - Include: Detailed description, reproduction steps, and potential impact

 **For general bugs:**
-- Use our [GitHub Issues](https://github.com/SmallDoges/flash-sparse-attention/issues)
+- Use our [GitHub Issues](https://github.com/flash-algo/flash-sparse-attention/issues)
 - Follow our [contributing guidelines](CONTRIBUTING.md)

 ## Response Timeline

@@ -108,5 +108,5 @@ For security-related questions or concerns:
 - Project maintainers: See [AUTHORS](AUTHORS) file

 For general support:
-- GitHub Issues: https://github.com/SmallDoges/flash-sparse-attention/issues
-- Documentation: https://github.com/SmallDoges/flash-sparse-attention/tree/main/docs/
+- GitHub Issues: https://github.com/flash-algo/flash-sparse-attention/issues
+- Documentation: https://github.com/flash-algo/flash-sparse-attention/tree/main/docs/

pyproject.toml

Lines changed: 3 additions & 3 deletions
@@ -40,9 +40,9 @@ classifiers = [
 ]

 [project.urls]
-Homepage = "https://github.com/SmallDoges/flash-sparse-attention"
-Source = "https://github.com/SmallDoges/flash-sparse-attention"
-Issues = "https://github.com/SmallDoges/flash-sparse-attention/issues"
+Homepage = "https://github.com/flash-algo/flash-sparse-attention"
+Source = "https://github.com/flash-algo/flash-sparse-attention"
+Issues = "https://github.com/flash-algo/flash-sparse-attention/issues"

 [project.optional-dependencies]
 triton = [
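
As context for the `[project.urls]` change above, the sketch below shows one way these entries surface in installed package metadata via `importlib.metadata`. It is not part of this commit, and it assumes the distribution is installed under the name `flash-sparse-attn` (as in the README's `pip install` command).

```python
# Minimal sketch (not project code): read back the [project.urls] table from
# an installed distribution's metadata. Assumes the distribution name is
# "flash-sparse-attn"; adjust if the package is installed under another name.
from importlib.metadata import metadata, PackageNotFoundError

try:
    meta = metadata("flash-sparse-attn")
    # Each Project-URL entry looks like:
    # "Homepage, https://github.com/flash-algo/flash-sparse-attention"
    for entry in meta.get_all("Project-URL") or []:
        label, _, url = entry.partition(", ")
        print(f"{label}: {url}")
except PackageNotFoundError:
    print("flash-sparse-attn is not installed in this environment")
```

After this commit, the printed URLs should point at the flash-algo organization rather than SmallDoges.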

setup.py

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
 PACKAGE_NAME = "flash_sparse_attn"

 BASE_WHEEL_URL = (
-    "https://github.com/SmallDoges/flash-sparse-attention/releases/download/{tag_name}/{wheel_name}"
+    "https://github.com/flash-algo/flash-sparse-attention/releases/download/{tag_name}/{wheel_name}"
 )

 # FORCE_BUILD: Force a fresh build locally, instead of attempting to find prebuilt wheels
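
For context on the `BASE_WHEEL_URL` change, here is a minimal sketch of how such a format-string template expands into a concrete release-download URL. The helper function and the tag/wheel names are illustrative placeholders, not taken from the project's setup.py or from any real release.

```python
# Minimal sketch, assuming the same template shape as BASE_WHEEL_URL above.
BASE_WHEEL_URL = (
    "https://github.com/flash-algo/flash-sparse-attention/releases/download/{tag_name}/{wheel_name}"
)

def wheel_url(tag_name: str, wheel_name: str) -> str:
    """Fill the release-download template with a tag and wheel filename."""
    return BASE_WHEEL_URL.format(tag_name=tag_name, wheel_name=wheel_name)

if __name__ == "__main__":
    # Hypothetical tag and wheel name, purely for demonstration.
    print(wheel_url("v1.0.0", "flash_sparse_attn-1.0.0-cp311-cp311-linux_x86_64.whl"))
```

Because only the organization in the URL changes, prebuilt-wheel downloads continue to work the same way; they are simply fetched from the flash-algo releases page.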
