
Commit 6c094e7: formatting

1 parent 0693cff

File tree: 1 file changed (+21, -23 lines)


README.md

Lines changed: 21 additions & 23 deletions
@@ -1,8 +1,8 @@
-# MegaFold
+# MegaFold: System-Level Optimizations for Accelerating Protein Structure Prediction Models
 
 ## About
 
-MegaFold is a cross-platform system to accelerate protein structure prediction models (e.g., AlphaFold3, AlphaFold2).
+[MegaFold](TODO:add arxiv link) is a cross-platform system to accelerate protein structure prediction models (e.g., AlphaFold3, AlphaFold2).
 
 Why MegaFold?
 
@@ -11,27 +11,24 @@ Why MegaFold?
 - **Memory reduction**: Reduces peak memory during training by up to 1.23x
 - **Sequence length extension**: Enables training on 1.35x longer sequence lengths
 
-Paper: [arxiv](TODO:add arxiv link)
-
 
 ## Usage
 
-<h3>alphafold3-pytorch</h3>
+### alphafold3-pytorch
 The `alphafold3-pytorch` folder includes AF3 training code (baseline and end-to-end MegaFold integrations) and instructions to reproduce our paper results. More details in `alphafold3-pytorch/README.md`.
 
-
-<details>
-<summary><h3>Data-loading optimizations</h3></summary>
+---
+### Data-loading optimizations
 The file `alphafold3-pytorch/omnifold/inputs.py` includes the data pipeline and implementation details for the ahead-of-time cache-based data loading optimizations.
 
 You can find details on deterministic input features cache in lines 4536-4553 and on MSA features cache in lines 4670-4732.
 
-</details>
-
+---
+### FusedEvoAttention
+The folder `FusedEvoAttention` includes source code of FusedEvoAttention kernel.
 
 <details>
-<summary><h3>FusedEvoAttention</h3></summary>
-The folder `FusedEvoAttention` includes source code of FusedEvoAttention kernel.
+<summary>Expand for step-by-step guide</summary>
 
 #### Step 1: Import
 
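The data-loading hunk above points to ahead-of-time caches for deterministic input features and MSA features in `alphafold3-pytorch/omnifold/inputs.py`. The sketch below illustrates the general ahead-of-time caching idea only; `load_or_compute_features`, `compute_features`, and `cache_dir` are hypothetical names, not the actual MegaFold implementation.

```
# Hypothetical sketch of ahead-of-time feature caching; names are illustrative,
# not MegaFold's implementation in omnifold/inputs.py.
import os
import torch

def load_or_compute_features(sample_id, cache_dir, compute_features):
    """Reuse features cached ahead of time when present; otherwise build and store them once."""
    path = os.path.join(cache_dir, f"{sample_id}.pt")
    if os.path.exists(path):
        return torch.load(path)          # cache hit: skip expensive featurization at train time
    feats = compute_features(sample_id)  # cache miss: featurize once, ahead of training
    torch.save(feats, path)
    return feats
```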
@@ -43,7 +40,7 @@ from evoformer import TritonEvoformer
 
 `FusedEvoAttention` supports 4 main types of EvoAttention in AlphaFold models, shown in the below examples. For accuracy, you need to adjust your inputs to their suggested shapes before passing in. Acronyms: `N_seq` is the MSA depth; `N_res` is the input sequence length.
 
-**1. Single Attention with Pair Bias**
+**a. Single Attention with Pair Bias**
 
 ```
 # Q, K, V: [Batch, 1, N_res, Head, Dim]
@@ -52,7 +49,7 @@ from evoformer import TritonEvoformer
 out = TritonEvoformer(Q, K, V, mask, pair_bias)
 ```
 
-**2. Triangle Attention (around starting node and around ending node)**
+**b. Triangle Attention (around starting node and around ending node)**
 
 ```
 # Q, K, V: [Batch, N_res, N_res, Head, Dim]
@@ -61,7 +58,7 @@ out = TritonEvoformer(Q, K, V, mask, pair_bias)
 out = TritonEvoformer(Q, K, V, mask, pair_bias)
 ```
 
-**3. MSA Row-wise Attention**
+**c. MSA Row-wise Attention**
 
 ```
 # Q, K, V: [Batch, N_seq, N_res, Head, Dim]
@@ -70,15 +67,14 @@ out = TritonEvoformer(Q, K, V, mask, pair_bias)
 out = TritonEvoformer(Q, K, V, mask, pair_bias)
 ```
 
-**4. MSA Column-wise Attention**
+**d. MSA Column-wise Attention**
 
 ```
 # Q, K, V: [Batch, N_res, N_seq, Head, Dim]
 # mask: [Batch, N_seq, 1, 1, N_res]
 out = TritonEvoformer(Q, K, V, mask)
 ```
 
----
 
 #### Step 3: Autotuning for optimal performance
 
@@ -99,10 +95,12 @@ TRITON_PRINT_AUTOTUNING=1 python your_script.py
 
 </details>
 
+---
+### FusedLayernormLinear
+The folder `FusedLayernormLinear` includes source code of fused layernorm-linear kernel.
 
 <details>
-<summary><h3>FusedLayernormLinear</h3></summary>
-The folder `FusedLayernormLinear` includes source code of fused layernorm-linear kernel.
+<summary>Expand for step-by-step guide</summary>
 
 #### Step 1: Import
 
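The next hunk's context notes that FusedLayernormLinear fuses sequential `LayerNorm` and `Linear` layers. A rough sketch of that kind of swap is below; the import path and constructor arguments are assumptions for illustration, and the real API lives in the `FusedLayernormLinear` folder.

```
import torch.nn as nn
# Hypothetical import path and constructor; see FusedLayernormLinear/ for the real API.
from fused_layernorm_linear import FusedLayernormLinear

dim_in, dim_out = 128, 256

# Before: two sequential modules with an intermediate normalized tensor.
baseline = nn.Sequential(nn.LayerNorm(dim_in), nn.Linear(dim_in, dim_out))

# After (sketch): one fused module covering the same LayerNorm -> Linear computation.
fused = FusedLayernormLinear(dim_in, dim_out)
```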
@@ -131,10 +129,12 @@ FusedLayernormLinear fuses sequential `LayerNorm` and `Linear` layers. You can r
 
 </details>
 
+---
+### FusedTransition
+The folder `FusedTransition` includes source code of FusedTransition kernel.
 
 <details>
-<summary><h3>FusedTransition</h3></summary>
-The folder `FusedTransition` includes source code of FusedTransition kernel.
+<summary>Expand for step-by-step guide</summary>
 
 #### Step 1: Import
 
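The following hunk shows the drop-in swap for `FusedTransition`. A minimal usage sketch based on that swap follows; the concrete `dim`, `expansion_factor`, batch size, and CUDA placement are assumptions, and `x` is an assumed [batch, tokens, dim] activation tensor.

```
# Sketch of using FusedTransition; sizes are illustrative only.
import torch
from fused_transition import FusedTransition  # import as shown in the hunk below

dim, expansion_factor = 128, 4
transition = FusedTransition(dim=dim, expansion_factor=expansion_factor)

x = torch.randn(2, 64, dim, device="cuda")  # assumed [batch, tokens, dim] activations
y = transition(x)  # expected to return a tensor with the same shape as x
```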
@@ -152,13 +152,11 @@ from fused_transition import FusedTransition
 + transition = FusedTransition(dim=dim, expansion_factor=expansion_factor)
 ```
 
-
 - **NOTE**: `FusedTransition` relies on FusedLayernormLinear for its expanding projections. Make sure you read FusedLayernormLinear's usage guide above.
 
 </details>
 
 
-
 ## Citation
 
 ```
