
Commit d747e46: rope links
1 parent: cd03271

File tree: 3 files changed (+5, -0 lines)

labml_nn/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@
 * [Transformer building blocks](transformers/models.html)
 * [Transformer XL](transformers/xl/index.html)
 * [Relative multi-headed attention](transformers/xl/relative_mha.html)
+* [Rotary Positional Embeddings](transformers/rope/index.html)
 * [Compressive Transformer](transformers/compressive/index.html)
 * [GPT Architecture](transformers/gpt/index.html)
 * [GLU Variants](transformers/glu_variants/simple.html)

labml_nn/transformers/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -22,6 +22,9 @@
 This implements Transformer XL model using
 [relative multi-head attention](xl/relative_mha.html)

+## [Rotary Positional Embeddings](rope/index.html)
+This implements Rotary Positional Embeddings (RoPE)
+
 ## [Compressive Transformer](compressive/index.html)

 This is an implementation of compressive transformer
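The commit only adds links to the RoPE implementation and does not show the mechanism itself. As a hedged sketch of the standard RoPE formulation (not labml's actual code): each pair of features at position `pos` is rotated by an angle `pos * base^(-i/d)`. The function name `rope` and the plain-list representation below are illustrative assumptions.

```python
import math

def rope(x, base=10000.0):
    """Apply Rotary Positional Embeddings to x of shape [seq_len, d], d even.

    Pure-Python sketch: each (even, odd) feature pair is treated as a 2-D
    vector and rotated by a position-dependent angle. Illustrative only,
    not the labml_nn API.
    """
    d = len(x[0])
    out = []
    for pos, vec in enumerate(x):
        row = []
        for i in range(0, d, 2):
            theta = pos * base ** (-i / d)  # rotation angle for this pair
            c, s = math.cos(theta), math.sin(theta)
            a, b = vec[i], vec[i + 1]
            # 2-D rotation of the pair (a, b)
            row.extend([a * c - b * s, a * s + b * c])
        out.append(row)
    return out
```

Because position 0 gets a zero angle, the first token's features pass through unchanged, and relative offsets between positions translate into relative rotation angles, which is the property RoPE is built around.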

readme.md

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@ implementations almost weekly.
 * [Transformer building blocks](https://nn.labml.ai/transformers/models.html)
 * [Transformer XL](https://nn.labml.ai/transformers/xl/index.html)
 * [Relative multi-headed attention](https://nn.labml.ai/transformers/xl/relative_mha.html)
+* [Rotary Positional Embeddings](https://nn.labml.ai/transformers/rope/index.html)
 * [Compressive Transformer](https://nn.labml.ai/transformers/compressive/index.html)
 * [GPT Architecture](https://nn.labml.ai/transformers/gpt/index.html)
 * [GLU Variants](https://nn.labml.ai/transformers/glu_variants/simple.html)

0 commit comments