🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies.
Slicing a PyTorch Tensor Into Parallel Shards
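A minimal sketch of that idea, assuming nothing about this repo's actual API: `torch.chunk` splits a weight matrix into column shards, and a column-parallel matmul concatenates the per-shard results.

```python
import torch

def shard_columns(weight: torch.Tensor, n_shards: int):
    """Split a 2-D weight into equal column shards, one per rank."""
    return torch.chunk(weight, n_shards, dim=1)

w = torch.randn(4, 8)
x = torch.randn(3, 4)

# Column-parallel linear layer: each shard computes an independent
# slice of the output, so the results are concatenated, not summed.
shards = shard_columns(w, 2)
y = torch.cat([x @ s for s in shards], dim=1)
assert torch.allclose(y, x @ w)
```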
Large-scale 4D-parallel pre-training for 🤗 transformers Mixture-of-Experts models *(still a work in progress)*
JORA: JAX Tensor-Parallel LoRA Library (ACL 2024)
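For context, the LoRA forward pass itself is just a frozen base matmul plus a scaled low-rank update. A minimal JAX sketch of generic LoRA (not JORA's actual API):

```python
import jax
import jax.numpy as jnp

def lora_linear(x, w, a, b, alpha=16.0):
    """y = x @ w + (alpha / r) * x @ a @ b, where w is frozen and only
    the low-rank factors a (d_in, r) and b (r, d_out) are trained."""
    r = a.shape[1]
    return x @ w + (alpha / r) * ((x @ a) @ b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (2, 16))
w = jax.random.normal(key, (16, 32))
a = jax.random.normal(key, (16, 4)) * 0.01
b = jnp.zeros((4, 32))  # standard LoRA init: the update starts at zero
print(lora_linear(x, w, a, b).shape)  # (2, 32)
```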
A distributed training framework for large language models powered by Lightning.
Tensor Parallelism with JAX + Shard Map
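A minimal sketch of a column-parallel matmul with `shard_map`, assuming a 1-D mesh over whatever devices are available (the axis name `"model"` is illustrative):

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.shard_map import shard_map

# 1-D mesh over all available devices.
mesh = Mesh(jax.devices(), axis_names=("model",))

def local_matmul(x, w_shard):
    # x is replicated; w_shard is this device's slice of the columns,
    # so a column-parallel layer needs no collective communication.
    return x @ w_shard

tp_matmul = shard_map(
    local_matmul, mesh=mesh,
    in_specs=(P(), P(None, "model")),  # replicate x, shard w's columns
    out_specs=P(None, "model"),        # output stays column-sharded
)

n = len(jax.devices())
x = jnp.ones((2, 8))
w = jnp.ones((8, 4 * n))  # column count must divide across the mesh
print(tp_matmul(x, w).shape)  # (2, 4 * n)
```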
Fast and easy distributed model training examples.
Democratizing Hugging Face model training with InternEvo