Commit e8b07d5 (1 parent: 89241a6)

Comment out top logo div and enable Flash-DMA banner in README and README_zh

2 files changed (+6, -6 lines)

README.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-<div align="center">
+<!-- <div align="center">
   <img src="./assets/logo.png" alt="SmallDoges" width="100%">
-</div>
+</div> -->
 
 <div align="center">
 
@@ -10,7 +10,7 @@
 </div>
 
 
-<!-- ![Flash-DMA Banner](assets/flash_dmattn_banner.png) -->
+![Flash-DMA Banner](assets/flash_dmattn_banner.png)
 
 Flash-DMA is a high-performance attention implementation that integrates Flash Attention's memory efficiency with Dynamic Mask Attention's sparse computation capabilities for processing extremely long sequences in transformer models.
README_zh.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-<div align="center">
+<!-- <div align="center">
   <img src="./assets/logo.png" alt="SmallDoges" width="100%">
-</div>
+</div> -->
 
 <div align="center">
 
@@ -10,7 +10,7 @@
 </div>
 
 
-<!-- ![Flash-DMA Banner](assets/flash_dmattn_banner.png) -->
+![Flash-DMA Banner](assets/flash_dmattn_banner.png)
 
 Flash-DMA is a high-performance attention implementation that combines Flash Attention's memory efficiency with Dynamic Mask Attention's sparse computation capability, for processing extremely long sequences in Transformer models. (Translated from the Chinese original in README_zh.md.)
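For context on the banner's claim: the README describes Flash-DMA as pairing Flash Attention's memory efficiency with Dynamic Mask Attention's sparse computation. The sketch below is a hypothetical, dense PyTorch reference of the dynamic-masking idea only; the function name and top_k parameter are illustrative assumptions, not the Flash-DMA API, and the real project would realize this inside a memory-efficient fused kernel rather than materializing the full score matrix.

# Hypothetical sketch of dynamic-mask attention semantics (NOT the Flash-DMA API).
import torch
import torch.nn.functional as F

def dynamic_mask_attention_reference(q, k, v, top_k=64):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) * scale
    if top_k < scores.shape[-1]:
        # Dynamic sparsity: per query, keep only the top-k highest-scoring
        # keys and mask the rest to -inf before the softmax.
        kth_best = scores.topk(top_k, dim=-1).values[..., -1:]
        scores = scores.masked_fill(scores < kth_best, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.einsum("bhqk,bhkd->bhqd", attn, v)

# Example: 2 sequences, 4 heads, 1024 tokens, 64-dim heads.
q = torch.randn(2, 4, 1024, 64)
k = torch.randn(2, 4, 1024, 64)
v = torch.randn(2, 4, 1024, 64)
out = dynamic_mask_attention_reference(q, k, v, top_k=128)
print(out.shape)  # torch.Size([2, 4, 1024, 64])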
