This is a collection of simple PyTorch implementations of various neural network architectures and layers. We will keep adding to it.
If you have suggestions for other implementations, please create a GitHub issue.
✨ The Transformers module contains implementations of multi-head attention and relative multi-head attention.
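As a quick illustration of the multi-head attention interface, here is a minimal self-attention sketch. It uses PyTorch's built-in `nn.MultiheadAttention` rather than the labml_nn implementation, and the shapes are made up for the example:

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from the library): embed dim 8, 2 heads
mha = nn.MultiheadAttention(embed_dim=8, num_heads=2)

# Input shaped (seq_len, batch, embed_dim)
x = torch.randn(4, 2, 8)

# Self-attention: query, key, and value are all the same tensor
out, attn_weights = mha(x, x, x)

print(out.shape)           # same shape as the input: (4, 2, 8)
print(attn_weights.shape)  # attention averaged over heads: (2, 4, 4)
```

The labml_nn versions follow the same idea but are written out step by step with annotations, which is the point of the library.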
✨ LSTM
Install it with pip:

```bash
pip install labml_nn
```
💬 Slack workspace for discussions
If you use LabML for academic research, please cite the library using the following BibTeX entry:
```bibtex
@misc{labml,
  author = {Varuna Jayasiri and Nipun Wijerathne},
  title = {LabML: A library to organize machine learning experiments},
  year = {2020},
  url = {https://lab-ml.com/},
}
```